CN114353807A - Robot positioning method and positioning device - Google Patents
Robot positioning method and positioning device
- Publication number
- CN114353807A (application CN202210274358.8A)
- Authority
- CN
- China
- Prior art keywords
- robot
- point cloud
- cloud data
- parking area
- target object
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Aviation & Aerospace Engineering (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The application provides a positioning method and a positioning device for a robot. Contour point cloud data acquired by the robot in a parking area of an underground parking lot according to its current pose are obtained, the parking area comprising at least one elevator entrance and a plurality of columns. The matching degree between the acquired contour point cloud data and the reference point cloud data corresponding to an electronic map of the parking area is calculated. If the matching degree is smaller than a preset matching threshold, the difference between each piece of contour point cloud data and the reference point cloud data is calculated, and the current pose of the robot is adjusted according to the differences. If the matching degree is not smaller than the preset matching threshold, the pose of the robot at which the matching degree reaches the preset matching threshold is determined as the target pose of the robot in the parking area. In this way the robot automatically and iteratively calculates the matching degree of its current position against the electronic map and matches itself to the map in real time for accurate positioning, which improves the positioning accuracy of the robot.
Description
Technical Field
The application relates to the technical field of positioning, in particular to a positioning method and a positioning device of a robot.
Background
With the development of society, robots play increasingly important roles in tasks such as transport and inspection. An existing Autonomous Mobile Robot (AMR) can detect its surrounding environment using sensors such as cameras and scanners and select an optimal path to reach a target point, which gives the AMR high flexibility.
In the related art, the working environment is highly variable. For example, when a robot is used to inspect an underground parking lot, the positions of the objects it recognizes change constantly as pedestrians pass and other vehicles drive in. During initial positioning, an approximate position must be given manually, and the robot is then rotated so that the positions it scans can be matched against an electronic map of the underground parking lot. Because the robot cannot automatically match itself to an accurate position, the initial positioning must be carried out manually, and since the manually given position is imprecise, the positioning error is large.
Disclosure of Invention
In view of the above, an object of the present application is to provide a positioning method and a positioning apparatus for a robot, in which the robot automatically and iteratively calculates the matching degree of its current position against an electronic map and matches itself to the map in real time for accurate positioning, thereby improving the positioning accuracy of the robot.
In a first aspect, an embodiment of the present application provides a positioning method for a robot, where the positioning method includes:
s110, acquiring contour point cloud data acquired by the robot in a parking area of the underground parking lot according to the current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
s120, calculating the matching degree between the acquired contour point cloud data and reference point cloud data corresponding to the electronic map of the parking area;
s130, if the matching degree is smaller than a preset matching threshold, calculating a difference value between each contour point cloud data and the reference point cloud data;
s140, adjusting the current pose of the robot according to the difference value, and returning to execute the step S110;
s150, if the matching degree is not smaller than a preset matching threshold, determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area.
Optionally, a panoramic camera is mounted on the robot; step S110 includes:
s1110, respectively determining the position coordinates of each target object according to the image information of a plurality of target objects in the parking area shot by the panoramic camera arranged on the robot; wherein the plurality of target objects include an elevator entrance and a plurality of columns in the parking area;
s1120, performing coordinate conversion processing on the position coordinate of each target object to obtain a reference coordinate of each target object in the electronic map of the parking area, and determining a reference object corresponding to the reference coordinate of each target object in the electronic map from the electronic map;
s1130, aiming at each target object, calculating the shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
s1140, if the calculated shape similarity is smaller than a first similarity threshold, identifying the contour pixel points of the target object from the image information of the target object, and adjusting a target pixel group onto the contour pixel points of the target object so as to change the shape of the target object, where the target pixel group comprises the 50 pixels nearest to all the contour pixel points of the target object, and returning to execute step S1130;
s1150, if the calculated shape similarity is greater than or equal to a first similarity threshold, the contour pixel points of the target object are not adjusted;
s1160, determining point cloud data corresponding to the contour pixel points of the target objects as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
Optionally, a first distance measuring sensor is installed at a first position of the robot, a second distance measuring sensor is installed at a second position of the robot, and the first position is different from the second position; step S110 further includes:
acquiring first scanning data obtained by scanning the parking area for one circle by using a first ranging sensor according to the current pose of the robot;
determining a first point cloud data set corresponding to the contour pixel point of each target object from the first scanning data;
comparing the first point cloud data set corresponding to the determined contour pixel point of each target object with a preset point cloud set of the target object; the preset point cloud set comprises point cloud data corresponding to contour pixel points of target objects in the electronic map;
if the comparison result is consistent, the second distance measuring sensor is not started, and the first point cloud data set corresponding to the contour pixel point of each target object is determined as the point cloud data set to be processed;
if the comparison result is inconsistent, starting a second distance measuring sensor to work, scanning the parking area for one circle by using the second distance measuring sensor to obtain second scanning data, determining a second point cloud data set corresponding to the contour pixel point of each target object from the second scanning data, clustering the point cloud data in the first point cloud data set and the second point cloud data set to obtain a clustered point cloud data set, and determining the clustered point cloud data set as a point cloud data set to be processed;
and determining all point cloud data in the point cloud data set to be processed as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
Optionally, a third distance measuring sensor is arranged on the robot; step S110 further includes:
acquiring third scanning data obtained by scanning the parking area for one circle by using the third ranging sensor according to the current pose of the robot;
determining a third point cloud data set corresponding to the contour pixel point of each target object from the third scanning data;
inputting the third point cloud data set into a pre-trained shape synthesis model to obtain the shape of a synthesis object;
calculating the shape similarity between the synthetic object and a target object corresponding to a third point cloud data set corresponding to the synthetic object;
if the calculated shape similarity is larger than a second similarity threshold value, determining point cloud data included in the third point cloud data set as contour point cloud data acquired by the robot in a parking area of an underground parking lot according to the current pose;
wherein the shape synthesis model is trained by:
acquiring scanning data samples of a plurality of target objects to be detected in a parking area of an underground parking lot and an actual shape of each target object to be detected;
and inputting the point cloud data set sample corresponding to the contour pixel point of each target object to be tested and the actual shape of the target object to be tested into a pre-constructed neural network model for training until the similarity between the shape of the synthetic object output by the neural network model and the actual shape of the target object to be tested is greater than a second similarity threshold value, and obtaining a trained shape synthetic model.
Optionally, step S120 includes:
for each contour pixel point corresponding to each target object, calculating the distance between the acquired contour point cloud data of that contour pixel point and the reference point cloud data corresponding to the electronic map of the parking area;
counting the target number of the contour pixel points with the calculated distance smaller than the distance threshold;
calculating the ratio of the number of the targets to the total number of all contour pixel points of the target object;
setting a first weight for the calculated ratio, and setting a second weight for the calculated shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
and determining the matching degree between the contour point cloud data and the reference point cloud data based on the ratio and the corresponding first weight thereof, and the shape similarity and the corresponding second weight thereof.
Optionally, in step S140, the step of adjusting the current pose of the robot according to the difference includes:
traversing reference pixel points of all reference objects in the electronic map of the parking area;
acquiring reference point cloud data corresponding to reference pixel points with gray values larger than preset gray values and contour point cloud data corresponding to contour pixel points of each target object;
and according to the differences between the reference point cloud data and the corresponding contour point cloud data, adjusting the heading direction of the robot toward the direction of the parking area in which the target object corresponding to the minimum difference among all the differences is located.
Optionally, the electronic map of the parking area is determined by:
acquiring a preset panoramic map of an underground parking lot, wherein the panoramic map comprises position coordinates of all reference objects in a map coordinate system;
moving a predetermined number of times around the parking area at an arbitrary position of the parking area using a robot;
identifying a stationary object in the parking area using a sensor on the robot, resulting in a relative position between the robot and the stationary object; the static object is an object which is in a static state in the process that the robot moves for a preset number of times;
determining a position identifier of a parking area where the robot is located according to the position of the robot in the parking area of the underground parking lot; the position mark comprises an area shape formed by an elevator entrance characteristic and any three upright post characteristics arranged around the elevator entrance;
dividing the panoramic map into a plurality of sub-area maps according to the characteristics of the elevator entrance in advance; each of the sub-area maps contains an elevator landing feature;
according to the position identification, determining a target sub-area map comprising the area shape from the plurality of sub-area maps;
according to the relative position between the robot and the static object and the target sub-area map, creating an electronic map which is centered by the robot and comprises the static object in the target sub-area map; wherein the stationary object comprises the target object.
Optionally, the positioning method further includes:
in the process of controlling the robot to inspect the parking area according to the target pose, acquiring environmental image data and laser point cloud data of the robot for the parking area and walking posture data of the robot; wherein the environmental image data are acquired by the panoramic camera mounted on the robot, and the laser point cloud data and the walking posture data are acquired by the sensors mounted on the robot; wherein the sensors comprise the first ranging sensor and the second ranging sensor, or the sensors comprise the third ranging sensor;
fusing the environment image data, the laser point cloud data and the walking attitude data to obtain a patrol state characteristic value of the robot;
inputting the inspection state characteristic value into an abnormal information detection model preset on the robot to obtain abnormal condition information of the parking area; the abnormal condition information comprises road surface abnormal information, environment abnormal information and walking posture abnormal information;
and sending the obtained abnormal condition information to a server so that the server sends an alarm instruction generated based on the abnormal condition information to a target user terminal, wherein the target user terminal is a user terminal connected with the robot.
Optionally, the step of fusing the environment image data, the laser point cloud data and the walking posture data to obtain the inspection state characteristic value of the robot includes:
according to a preset first fusion method for stationary objects and a preset second fusion method for non-stationary objects, assigning an adaptive target fusion method to the currently scanned object and obtaining a fused inspection state characteristic value; the target fusion method is the first fusion method or the second fusion method; the first fusion method is a decision-level fusion method decided jointly by the environmental image data and the laser point cloud data, and the second fusion method is a feature-level fusion method dominated by the walking posture data;
determining that the currently scanned object is a stationary object by:
periodically extracting position information corresponding to each scanned object from the environment image data, and judging whether the position information corresponding to each scanned object is changed or not;
and if the position information corresponding to the scanned object does not change within a preset time period, determining that the currently scanned object is a static object.
In a second aspect, an embodiment of the present application further provides a positioning device for a robot, where the positioning device includes:
the data acquisition module is used for acquiring contour point cloud data acquired by the robot in a parking area of the underground parking lot according to the current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
the matching degree calculation module is used for calculating the matching degree between the acquired contour point cloud data and the reference point cloud data corresponding to the electronic map of the parking area;
the difference value calculation module is used for calculating the difference value between each contour point cloud data and the reference point cloud data if the matching degree is smaller than a preset matching threshold value;
the cyclic execution module is used for adjusting the current pose of the robot according to the difference and returning to the step of acquiring contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose;
and the pose determining module is used for determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area if the matching degree is not less than the preset matching threshold.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the robot positioning method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the positioning method for a robot as described above.
The positioning method and the positioning device for a robot provided by the embodiments of the application comprise: acquiring contour point cloud data acquired by the robot in a parking area of an underground parking lot according to the current pose, the parking area comprising at least one elevator entrance and a plurality of columns; calculating the matching degree between the acquired contour point cloud data and the reference point cloud data corresponding to the electronic map of the parking area; if the matching degree is smaller than a preset matching threshold, calculating the difference between each piece of contour point cloud data and the reference point cloud data, adjusting the current pose of the robot according to the differences, and returning to the step of acquiring contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose; and if the matching degree is not smaller than the preset matching threshold, determining the pose of the robot at which the matching degree reaches the preset matching threshold as the target pose of the robot in the parking area.
Compared with the prior art, in which an approximate position is set manually during initial positioning and the robot is then rotated so that the scanned position matches the electronic map of the underground parking lot, the robot in the present application automatically and iteratively calculates the matching degree of its current position in the electronic map and matches itself to the map in real time for accurate positioning, which improves the positioning accuracy of the robot.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart of a positioning method for a robot according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a process of acquiring contour point cloud data by using a panoramic camera according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a robot being positioned in a parking area using a prior art positioning method;
FIG. 4 is a schematic illustration of a robot being positioned in a parking area using the positioning method of the present application;
fig. 5 is a schematic structural diagram of a positioning device of a robot according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
First, an application scenario to which the present application is applicable is described. The application can be applied to the field of robot positioning. In the related art, positioning is limited by the variability of the working environment; for example, when a robot is applied to the inspection of an underground parking lot, the positions of the objects it recognizes change constantly as pedestrians pass and other vehicles drive in. An approximate position must be given manually during the initial positioning of the robot, and the robot is then rotated so that the positions it scans match an electronic map of the underground parking lot. Because the robot cannot automatically match itself to an accurate position, the initial positioning must be carried out manually, and since the manually given position is imprecise, the positioning error is large.
Based on this, the embodiments of the application provide a positioning method for a robot, so that the matching degree of the robot's current position in an electronic map is calculated automatically and iteratively and the robot is matched with the electronic map in real time for accurate positioning, thereby improving the positioning accuracy of the robot.
Referring to fig. 1, fig. 1 is a flowchart illustrating a positioning method of a robot according to an embodiment of the present disclosure. As shown in fig. 1, a positioning method provided in an embodiment of the present application includes:
s110, acquiring contour point cloud data acquired by the robot in a parking area of the underground parking lot according to the current pose; the parking area includes at least one elevator entrance and a plurality of columns.
S120, calculating the matching degree between the acquired outline point cloud data and reference point cloud data corresponding to the electronic map of the parking area;
s130, if the matching degree is smaller than a preset matching threshold, calculating a difference value between each contour point cloud data and the reference point cloud data;
s140, adjusting the current pose of the robot according to the difference value, and returning to execute the step S110;
s150, if the matching degree is not smaller than a preset matching threshold, determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area.
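Illustratively, the overall loop of steps S110 to S150 can be summarized in the following minimal sketch. The distance threshold, the iteration limit, the centroid-based pose adjustment, and the helper name scan_at_pose are assumptions made only for illustration; the adjustment actually used by the method is described in the embodiments below.

```python
import numpy as np

def locate_robot(scan_at_pose, references, initial_pose,
                 match_threshold=0.6, distance_threshold=0.1, max_iterations=50):
    """Iterative positioning loop of steps S110-S150 (minimal sketch).

    scan_at_pose(pose) returns the contour point cloud (N x 2) acquired at that
    pose (S110); `references` is the reference point cloud (M x 2) of the map.
    """
    pose = np.array(initial_pose, dtype=float)        # (x, y, heading angle)
    for _ in range(max_iterations):
        contours = scan_at_pose(pose)                                          # S110
        nearest = np.min(np.linalg.norm(contours[:, None, :] - references[None, :, :],
                                        axis=2), axis=1)
        degree = float(np.mean(nearest < distance_threshold))                  # S120
        if degree >= match_threshold:                                          # S150
            return pose                           # target pose in the parking area
        difference = references.mean(axis=0) - contours.mean(axis=0)           # S130 (simplified)
        pose[:2] += 0.5 * difference                                           # S140: adjust pose,
        pose[2] = np.arctan2(difference[1], difference[0])                     # then return to S110
    return None                                                                # positioning failed
```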
The above steps are exemplified below with reference to specific embodiments.
In step S110, the current pose is the initial pose in which the robot is placed in the parking area, including the position coordinates and the heading angle of the robot's current position in the parking area. The parking area is selected as an area of the underground parking lot that comprises at least one elevator entrance and a plurality of columns. Because the elevator entrance is usually crowded with people, the environment is particular: the objects seen by the robot change constantly during positioning and scanning, which easily introduces errors into the positioning. In view of this problem, the following scheme of the embodiment of the present application is adopted during robot positioning to improve positioning accuracy.
In the embodiment of the application, the current pose of the robot in the parking area of the underground parking lot can be determined through the following steps:
traversing the reference pixel points of all reference objects in the electronic map corresponding to the parking area; acquiring the reference point cloud data corresponding to all reference pixel points whose gray values are larger than a preset gray value, and the contour point cloud data corresponding to the contour pixel points of each target object in the parking area; according to the difference between the reference point cloud data of a reference object and the corresponding contour point cloud data of a target object, determining the reference pixel points whose difference is smaller than a preset difference threshold as pre-matched reference pixel points preliminarily matched with the contour pixel points; sequentially removing spurious correspondences from the pre-matched reference pixel points and performing nonlinear local optimization on them to obtain the reference pixel point best matched with the contour pixel points of each target object; determining the best-matched reference pixel point as the target reference pixel point of the target object in the electronic map, and obtaining the reference point cloud data, the quaternion and the GPS data corresponding to the target reference pixel point; calculating the initial position coordinates of the robot in an initial coordinate system by combining the quaternions and the GPS data corresponding to the target reference pixel points, where the initial coordinate system is established from the GPS data corresponding to the first selected target reference pixel point; converting the initial coordinate system into the map coordinate system through preprocessing, so that the initial position coordinates of the robot are converted into position coordinates in the map coordinate system, where the preprocessing comprises translating along the Z axis of the initial coordinate system by a first preset distance, rotating counterclockwise around the Y axis of the initial coordinate system by a first angle, translating along the X axis of the initial coordinate system by a second preset distance, rotating counterclockwise around the Y axis of the initial coordinate system by a second angle, and rotating counterclockwise around the Z axis of the initial coordinate system by a third angle; and determining the position coordinates of the robot in the map coordinate system together with the initial heading angle of the robot as the current pose of the robot in the electronic map of the parking area. Here, the robot operates in two dimensions, and the first angle, the second angle and the third angle are all different.
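Illustratively, the preprocessing chain described above can be composed as homogeneous transforms; this is only a sketch, in which the distances and angles are placeholders to be set per deployment, and applying the successive operations in the fixed initial coordinate system (left-multiplication) is an assumption about the intended order.

```python
import numpy as np

def rot_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x=0.0, y=0.0, z=0.0):
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def initial_to_map_transform(d1, a1, d2, a2, a3):
    """Translate along Z by d1, rotate about Y by a1, translate along X by d2,
    rotate about Y by a2, rotate about Z by a3 (all rotations counterclockwise)."""
    return rot_z(a3) @ rot_y(a2) @ trans(x=d2) @ rot_y(a1) @ trans(z=d1)

# converting an initial position into map coordinates (example values only)
p_initial = np.array([1.0, 2.0, 0.0, 1.0])
p_map = initial_to_map_transform(0.5, np.pi / 6, 1.0, np.pi / 4, np.pi / 2) @ p_initial
```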
Illustratively, the target objects in the parking area may include the elevator entrance and the plurality of columns in the parking area.
In the embodiment of the application, taking as an example a robot applied to a parking area of an underground parking lot, the parking area includes at least one elevator entrance and a plurality of columns. Since pedestrians often pass through the elevator entrance, the robot also scans them; but the pedestrians are constantly moving, and if these moving pedestrians were also taken as target objects, the positioning of the robot would easily be affected. Therefore only stationary objects, such as the elevator entrance and the columns, are used as references for positioning the robot.
Specifically, in step S110, when contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose is acquired, the following various ways may be adopted:
firstly, a first distance measuring sensor is arranged at a first position of the robot, a second distance measuring sensor is arranged at a second position of the robot, and the first position is different from the second position; specifically, step S110 includes:
acquiring first scanning data obtained by scanning the parking area for one circle by using a first ranging sensor according to the current pose of the robot; determining a first point cloud data set corresponding to the contour pixel point of each target object from the first scanning data; comparing the first point cloud data set corresponding to the determined contour pixel point of each target object with a preset point cloud set of the target object; the preset point cloud set comprises point cloud data corresponding to contour pixel points of target objects in the electronic map; if the comparison result is consistent, the second distance measuring sensor is not started, and the first point cloud data set corresponding to the contour pixel point of each target object is determined as the point cloud data set to be processed; if the comparison result is inconsistent, starting a second distance measuring sensor to work, scanning the parking area for one circle by using the second distance measuring sensor to obtain second scanning data, determining a second point cloud data set corresponding to the contour pixel point of each target object from the second scanning data, clustering the point cloud data in the first point cloud data set and the second point cloud data set to obtain a clustered point cloud data set, and determining the clustered point cloud data set as a point cloud data set to be processed; and determining all point cloud data in the point cloud data set to be processed as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
In this step, ranging sensors may be installed at both the top position and the waist position of the robot. Installing ranging sensors at two positions prevents one of the sensors from failing to recognize a complete image because it is occluded by other objects. The waist position of the robot refers to the position at half the height of the robot. By using both the first ranging sensor and the second ranging sensor, the objects scanned by the two sensors can be superposed and fused.
Here, the first point cloud data set corresponding to the contour pixel points of each target object is compared with the preset point cloud set of that target object. The comparison result may be the coordinate difference between the corresponding point cloud data; when the coordinate difference is within a preset difference range, that is, when the coordinate difference is small, the comparison result may be considered consistent.
For example, it may be determined whether the target object identified by the first ranging sensor is complete by comparing it against a complete image of the target object pre-stored on a server connected to the first ranging sensor. In a specific implementation, some feature identifiers may be extracted from the identified target object, and the object image corresponding to those feature identifiers may then be looked up in a database on the server.
Specifically, the point cloud data in the first point cloud data set and in the second point cloud data set are clustered to obtain a clustered point cloud data set: for duplicated points only one point is kept and the redundant points are deleted, while the distinct points are clustered together to form the clustered point cloud data set. The clustered point cloud data set represents the contour point cloud data of the target object, so that the contour point cloud data are clearer and more complete, avoiding the low acquisition accuracy caused by occlusion and similar problems when only a single ranging sensor is used for scanning.
For example, linear superposition of image pixels may be performed on contour pixel points corresponding to the target object identified by the second distance measurement sensor and the target object identified by the first distance measurement sensor, so as to obtain combined pixel points to be determined, and obtain a point cloud data set to be processed corresponding to the combined pixel points to be determined.
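A minimal sketch of the comparison and merging logic described above is given below, assuming the point cloud sets are N x 2 arrays; the tolerance values and the grid-rounding scheme used in place of a full clustering algorithm are assumptions made for illustration.

```python
import numpy as np

def sets_consistent(first_set, preset_set, tolerance=0.05):
    """Consistent when the coordinate differences stay within the preset range."""
    if first_set.shape != preset_set.shape:
        return False
    return bool(np.all(np.abs(first_set - preset_set) <= tolerance))

def merge_point_sets(first_set, second_set, cell=0.05):
    """Cluster the points from both ranging sensors: duplicated points (falling
    in the same grid cell) are kept once, distinct points are all retained."""
    combined = np.vstack([first_set, second_set])
    keys = np.round(combined / cell).astype(int)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return combined[np.sort(unique_idx)]

def point_cloud_set_to_process(first_set, second_set, preset_set):
    if sets_consistent(first_set, preset_set):
        return first_set                      # second ranging sensor is not started
    return merge_point_sets(first_set, second_set)
```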
Secondly, a third distance measuring sensor is arranged on the robot; step S110 includes:
acquiring third scanning data obtained by scanning the parking area for one circle by using the third ranging sensor according to the current pose of the robot; determining a third point cloud data set corresponding to the contour pixel point of each target object from the third scanning data; inputting the third point cloud data set into a pre-trained shape synthesis model to obtain the shape of a synthesis object; calculating the shape similarity between the synthetic object and a target object corresponding to a third point cloud data set corresponding to the synthetic object; and if the calculated shape similarity is larger than a second similarity threshold value, determining that the point cloud data included in the third point cloud data set is contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
In this step, the shape of the synthesis object is determined by a previously constructed shape synthesis model.
Specifically, the second similarity threshold is a similarity critical value representing that the synthetic object and the target object are the same object, and if the shape similarity is greater than the second similarity threshold, the synthetic object is determined to be the target object.
Wherein the shape synthesis model is trained by:
acquiring scanning data samples of a plurality of target objects to be detected in a parking area of an underground parking lot and an actual shape of each target object to be detected; and inputting the point cloud data set sample corresponding to the contour pixel point of each target object to be tested and the actual shape of the target object to be tested into a pre-constructed neural network model for training until the similarity between the shape of the synthetic object output by the neural network model and the actual shape of the target object to be tested is greater than a second similarity threshold value, and obtaining a trained shape synthetic model.
In this step, a deep learning model framework may be established in advance on the server and may include a YOLO-series deep network, a ResNet fully convolutional neural network, and the like. These models learn quickly and are highly accurate, making them well suited to analyzing and synthesizing the acquired point data set.
After the trained shape synthesis model is obtained, the shape synthesis model can be pruned and then sent to a computing platform of the robot, and the scanned shape data is processed in real time through the shape synthesis model on the robot.
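The training loop described above can be sketched as follows. The network architecture, the fixed-size flattened point-set representation, the similarity metric, and the use of PyTorch are all assumptions made for illustration; the patent only requires a neural network trained until the similarity between the synthesized shape and the actual shape exceeds the second similarity threshold.

```python
import torch
import torch.nn as nn

class ShapeSynthesisModel(nn.Module):
    """Maps a flattened contour point set to a flattened synthetic shape."""
    def __init__(self, num_points=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_points * 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_points * 2),
        )

    def forward(self, points):                  # points: (batch, num_points * 2)
        return self.net(points)

def shape_similarity(pred, target):
    """Illustrative similarity in [0, 1]; the patent does not fix the metric."""
    return (1.0 / (1.0 + torch.mean((pred - target) ** 2))).item()

def train_shape_model(samples, shapes, similarity_threshold=0.9, max_epochs=1000):
    """samples/shapes: float tensors of scan-data samples and actual shapes."""
    model = ShapeSynthesisModel(num_points=samples.shape[1] // 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(max_epochs):
        optimizer.zero_grad()
        pred = model(samples)
        loss = loss_fn(pred, shapes)
        loss.backward()
        optimizer.step()
        # stop once the synthetic shapes are similar enough to the actual shapes
        if shape_similarity(pred.detach(), shapes) > similarity_threshold:
            break
    return model
```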
For example, the first distance measuring sensor, the second distance measuring sensor and the third distance measuring sensor may be optical distance measuring instruments, infrared distance measuring sensors, laser distance measuring sensors, ultrasonic distance measuring sensors, radar sensors, etc.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for acquiring contour point cloud data by using a panoramic camera according to an embodiment of the present disclosure. As shown in fig. 2, step S110 specifically includes:
s1110, respectively determining the position coordinates of each target object according to the image information of a plurality of target objects in the parking area shot by the panoramic camera arranged on the robot; wherein the plurality of target objects include an elevator entrance and a plurality of columns in the parking area;
s1120, performing coordinate conversion processing on the position coordinate of each target object to obtain a reference coordinate of each target object in the electronic map of the parking area, and determining a reference object corresponding to the reference coordinate of each target object in the electronic map from the electronic map;
s1130, aiming at each target object, calculating the shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
s1140, if the calculated shape similarity is smaller than a first similarity threshold, identifying the contour pixel points of the target object from the image information of the target object, and adjusting a target pixel group onto the contour pixel points of the target object so as to change the shape of the target object, where the target pixel group comprises the 50 pixels nearest to all the contour pixel points of the target object, and returning to execute step S1130;
s1150, if the calculated shape similarity is greater than or equal to a first similarity threshold, the contour pixel points of the target object are not adjusted;
s1160, determining point cloud data corresponding to the contour pixel points of the target objects as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
According to the scheme, a panoramic camera is mounted on the robot and used to photograph the parking area, obtaining image information of the plurality of target objects located in the parking area; the position coordinates of each target object are determined from the image information, which gives the actual position of each target object in the parking area. Using the electronic map corresponding to the parking area, here a three-dimensional map, the reference coordinates of each target object in the electronic map are obtained through coordinate conversion of the position coordinates of that target object, so that the reference object corresponding to those reference coordinates can be found in the electronic map from the position coordinates of the target object identified by the panoramic camera.
The first similarity threshold is a similarity critical value representing that the target object and the reference object are the same object, and if the shape similarity is greater than or equal to the first similarity threshold, the contour pixel point of the target object is not adjusted any more, and the identified target object and the reference object can be considered to be the same object. The target pixel group refers to a set of 50 pixels with the minimum sum of distances to all contour pixels of the target object.
Here, in step S110, after the contour point cloud data is acquired by the panoramic camera, the contour point cloud data may be acquired by the first distance measuring sensor and the second distance measuring sensor together or by the third distance measuring sensor, and the contour point cloud data acquired by different means may be fused to obtain the contour point cloud data acquired from the parking area. Here, the method of fusion may be a weighted fusion method.
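A minimal sketch of the weighted fusion mentioned above is given below, under the assumption that the contour point clouds from the camera pipeline and from the ranging sensors have been resampled to the same size and ordering; the weight value is illustrative.

```python
import numpy as np

def fuse_contour_point_clouds(camera_points, ranging_points, camera_weight=0.5):
    """Weighted fusion of contour point clouds acquired by different means.
    Both inputs are assumed to be N x 2 arrays describing the same contour."""
    ranging_weight = 1.0 - camera_weight
    return camera_weight * np.asarray(camera_points) + ranging_weight * np.asarray(ranging_points)
```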
In steps S120 to S140, calculating a matching degree between the acquired contour point cloud data and reference point cloud data corresponding to the electronic map of the parking area; if the matching degree is smaller than a preset matching threshold value, calculating a difference value between each contour point cloud data and the reference point cloud data; and adjusting the current pose of the robot according to the difference value, and returning to execute the step S110.
In step S120, a matching degree between the acquired contour point cloud data and reference point cloud data corresponding to the electronic map of the parking area is calculated. Specifically, the following steps may be performed:
for each contour pixel point corresponding to each target object, calculating the distance between the acquired contour point cloud data of that contour pixel point and the reference point cloud data corresponding to the electronic map of the parking area; counting the target number of contour pixel points whose calculated distance is smaller than the distance threshold; calculating the ratio of the target number to the total number of contour pixel points of the target object; setting a first weight for the calculated ratio, and setting a second weight for the calculated shape similarity between the target object and the reference object corresponding to the reference coordinate of the target object in the electronic map; and determining the matching degree between the contour point cloud data and the reference point cloud data based on the ratio and its first weight, and the shape similarity and its second weight.
Here, the shape similarity mainly includes the contour similarity and the content similarity between the actual target object and the reference object. The matching degree between the target object and the reference object is determined mainly by the proportion, among all contour pixel points, of pixel points whose distance is smaller than the distance threshold, with the shape similarity between the target object and the reference object as an auxiliary factor; the first weight lies between 0.6 and 0.8, the second weight lies between 0.2 and 0.4, and the sum of the first weight and the second weight is 1.
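Illustratively, the weighted matching-degree computation can be sketched as follows; the brute-force nearest-neighbour search and the specific threshold and weight values are assumptions for illustration only.

```python
import numpy as np

def matching_degree(contour_points, reference_points, shape_similarity,
                    distance_threshold=0.1, first_weight=0.7, second_weight=0.3):
    """Matching degree = first_weight * ratio + second_weight * shape similarity,
    where ratio is the fraction of contour pixel points whose nearest reference
    point lies within the distance threshold."""
    diffs = contour_points[:, None, :] - reference_points[None, :, :]
    nearest = np.min(np.linalg.norm(diffs, axis=2), axis=1)
    target_count = np.sum(nearest < distance_threshold)
    ratio = target_count / len(contour_points)
    return first_weight * ratio + second_weight * shape_similarity
```

The simpler ratio-only variant described next corresponds to calling this function with the first weight set to 1 and the second weight set to 0.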
In addition, the matching degree between the contour point cloud data corresponding to the target object in the parking area and the reference point cloud data corresponding to the electronic map of the parking area can be calculated in the following manner:
for each contour pixel point corresponding to each target object, calculating the distance between the acquired contour point cloud data of that contour pixel point and the reference point cloud data corresponding to the electronic map of the parking area; counting the target number of contour pixel points whose calculated distance is smaller than the distance threshold; calculating the ratio of the target number to the total number of contour pixel points of the target object; and determining the ratio as the matching degree between the contour point cloud data and the reference point cloud data.
This approach does not take shape into account, which reduces the amount of computation and thereby saves computing resources.
In step S130, if the matching degree is smaller than a preset matching threshold, a difference between each contour point cloud data and the reference point cloud data is calculated.
In this step, for each pixel point, a difference between the contour point cloud data and the reference point cloud data corresponding to the pixel point needs to be calculated.
In step S140, the current pose of the robot is adjusted according to the difference, and the process returns to step S110.
Specifically, step S140 includes: traversing the reference pixel points of all reference objects in the electronic map of the parking area; acquiring the reference point cloud data corresponding to all reference pixel points whose gray values are larger than a preset gray value, and the contour point cloud data corresponding to the contour pixel points of each target object; and according to the differences between the reference point cloud data and the corresponding contour point cloud data, adjusting the heading direction of the robot toward the direction of the parking area in which the target object corresponding to the minimum difference among all the differences is located.
Here, some non-target objects can be filtered out by gray value. The smaller the difference between the reference point cloud data and the corresponding contour point cloud data, the more accurately the robot is positioned. Therefore, when the differences between the reference point cloud data and the corresponding contour point cloud data are calculated, the heading direction of the robot is adjusted toward the direction of the parking area in which the target object corresponding to the minimum difference is located. The minimum difference indicates that the position of that target object in the parking area is closest to the position of its reference object in the electronic map, that is, the robot is positioned most accurately with respect to that target object; the robot can therefore be calibrated against this position, the heading angle of the robot is adjusted continuously, and steps S110 to S120 are then executed again.
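A minimal sketch of this heading adjustment is given below, under the assumption that each target object's contribution is summarized as a single difference value and a representative position; the incremental step size is illustrative.

```python
import numpy as np

def adjust_heading(robot_position, heading_angle, object_positions, differences,
                   step=0.1):
    """Turn the robot toward the parking-area direction of the target object
    whose point cloud difference from the reference data is smallest."""
    best = int(np.argmin(differences))               # object with the minimum difference
    dx, dy = np.asarray(object_positions[best]) - np.asarray(robot_position)
    desired_heading = np.arctan2(dy, dx)             # direction of that object
    error = np.arctan2(np.sin(desired_heading - heading_angle),
                       np.cos(desired_heading - heading_angle))
    return heading_angle + step * error              # adjust the heading incrementally
```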
Further, in step S120, the electronic map of the parking area is determined by: acquiring a preset panoramic map of the underground parking lot, the panoramic map comprising the position coordinates of all reference objects in the map coordinate system; using the robot to move a predetermined number of times around the parking area from an arbitrary position in the parking area; identifying stationary objects in the parking area with the sensors on the robot to obtain the relative position between the robot and each stationary object, a stationary object being an object that remains static while the robot moves the predetermined number of times and including the elevator entrance and the columns; determining the position identifier of the parking area in which the robot is located according to the position of the robot in the parking area of the underground parking lot, the position identifier comprising a region shape formed by the elevator entrance feature and any three column features arranged around the elevator entrance; dividing the panoramic map in advance into a plurality of sub-area maps according to the elevator entrance features, each sub-area map containing one elevator entrance feature; determining, according to the position identifier, the target sub-area map that contains the region shape formed by the elevator entrance feature and the three column features arranged around the elevator entrance from the plurality of sub-area maps; and creating, according to the relative positions between the robot and the stationary objects and the target sub-area map, an electronic map that is centered on the robot and contains the stationary objects in the target sub-area map.
Here, the position identifier is an identifier extracted from the position of the robot in the parking area of the underground parking lot and is able to indicate accurately the position information of the robot in the parking area. Specifically, the position identifier includes a region shape composed of the elevator entrance feature and any three column features arranged around the elevator entrance.
The electronic map of the parking area is mainly composed of the target sub-area map, centered on the position of the robot.
For example, the electronic map may be a preset grid map.
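Illustratively, selecting the target sub-area map from the position identifier can be sketched as follows. Representing the region shape by its sorted pairwise distances and exposing the four feature points of each sub-area map as an attribute are assumptions made for illustration, since the patent only requires that the region shape can be matched.

```python
import numpy as np
from itertools import combinations

def shape_signature(feature_points):
    """Signature of the region shape formed by the elevator entrance feature and
    three column features: the sorted pairwise distances between the four points."""
    dists = [np.linalg.norm(np.asarray(a) - np.asarray(b))
             for a, b in combinations(feature_points, 2)]
    return np.sort(dists)

def select_target_submap(position_identifier, sub_area_maps, tolerance=0.5):
    """Return the sub-area map whose elevator/column layout best matches the
    position identifier; each sub-area map is assumed to expose `feature_points`."""
    target_sig = shape_signature(position_identifier)
    best_map, best_error = None, float("inf")
    for sub_map in sub_area_maps:
        error = np.linalg.norm(shape_signature(sub_map.feature_points) - target_sig)
        if error < best_error:
            best_map, best_error = sub_map, error
    return best_map if best_error < tolerance else None
```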
In step S150, if the matching degree is not less than the preset matching threshold, determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area, where the target pose includes a target position coordinate and a target course angle of the robot.
If the matching degree is not smaller than the preset matching threshold, determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area.
In an optional embodiment, the method further comprises: in the process of controlling the robot to inspect the parking area according to the target pose, acquiring environmental image data and laser point cloud data of the robot for the parking area and walking posture data of the robot; wherein the environmental image data are acquired by the panoramic camera mounted on the robot, and the laser point cloud data and the walking posture data are acquired by the sensors mounted on the robot; wherein the sensors comprise the first ranging sensor and the second ranging sensor, or the sensors comprise the third ranging sensor; fusing the environmental image data, the laser point cloud data and the walking posture data to obtain an inspection state characteristic value of the robot; inputting the inspection state characteristic value into an abnormal information detection model preset on the robot to obtain abnormal condition information of the parking area, the abnormal condition information comprising road surface abnormality information, environment abnormality information and walking posture abnormality information; and sending the obtained abnormal condition information to a server so that the server sends an alarm instruction generated based on the abnormal condition information to a target user terminal, the target user terminal being a user terminal connected with the robot.
Specifically, the step of fusing environment image data, laser point cloud data and walking attitude data to obtain the inspection state characteristic value of the robot comprises the following steps:
according to a preset first fusion method for stationary objects and a preset second fusion method for non-stationary objects, assigning an adaptive target fusion method to the currently scanned object and obtaining the fused inspection state characteristic value; the target fusion method is the first fusion method or the second fusion method; the first fusion method is a decision-level fusion method decided jointly by the environmental image data and the laser point cloud data, and the second fusion method is a feature-level fusion method dominated by the walking posture data.
Determining that the currently scanned object is a stationary object by:
periodically extracting position information corresponding to each scanned object from the environment image data, and judging whether the position information corresponding to each scanned object is changed or not;
and if the position information corresponding to the scanned object does not change within a preset time period, determining that the currently scanned object is a static object.
In this step, the stationary objects include the target objects, such as the elevator entrance and the columns, whose positions never change; the first fusion method, namely the decision-level fusion method jointly decided by the environment image data and the laser point cloud data, is therefore adopted for stationary objects. Non-stationary objects, such as pedestrians and moving vehicles, whose positions change constantly, require the second fusion method, namely the feature-level fusion method based mainly on the walking posture data.
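A minimal sketch of the stationary-object test and the fusion-method choice described above is given below. The position tolerance, the observation window, and the two fusion placeholders (`fuse_decision_level`, `fuse_feature_level`) are assumptions introduced for illustration and are not details stated in this application.

```python
import numpy as np

POSITION_TOLERANCE = 0.05   # metres; assumed tolerance for "position does not change"
OBSERVATION_WINDOW = 10     # assumed number of periodic extractions to compare

def is_stationary(position_history):
    """True if the positions extracted for an object stay within tolerance over the window."""
    history = np.asarray(position_history[-OBSERVATION_WINDOW:])
    if len(history) < OBSERVATION_WINDOW:
        return False                                 # not enough observations yet
    drift = np.linalg.norm(history - history[0], axis=1)
    return bool(np.all(drift < POSITION_TOLERANCE))

def select_fusion(position_history, image_feat, lidar_feat, gait_feat):
    """Assign the adaptive target fusion method to the currently scanned object."""
    if is_stationary(position_history):
        # first fusion method: decision level, image and laser point cloud jointly
        return fuse_decision_level(image_feat, lidar_feat)
    # second fusion method: feature level, walking posture data dominant
    return fuse_feature_level(gait_feat, image_feat, lidar_feat)

def fuse_decision_level(image_feat, lidar_feat):
    # placeholder: average the two independent decisions
    return 0.5 * (np.asarray(image_feat) + np.asarray(lidar_feat))

def fuse_feature_level(gait_feat, image_feat, lidar_feat):
    # placeholder: walking posture features weighted most heavily
    return 0.6 * np.asarray(gait_feat) + 0.2 * np.asarray(image_feat) + 0.2 * np.asarray(lidar_feat)
```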
By using an ICP point cloud matching algorithm on the robot, the self-positioning function of the robot can be realized and the positioning state can be reported in real time.
For example, as shown in fig. 3, the arrow indicates the heading direction of the robot 310, the blank area is the drivable area in the parking area, and the parking area includes an elevator 320, a plurality of columns 330, a pedestrian 340 at the elevator 320, and the like. When the robot is positioned in the parking area by the existing positioning method, the positioning line 350 does not coincide with the boundaries of the target objects (the elevator 320 and the plurality of columns 330), and the overall deviation is large. After the positioning method of the embodiment of the present application is adopted, as shown in fig. 4, the positioning line 350 coincides with most of the target objects (the elevator 320 and the plurality of columns 330), and the calculated coincidence ratio between the target objects scanned by the robot and the electronic map (the preset grid map) reaches 70%. The positioning line 350 in figs. 3 and 4 is the line obtained when the robot is positioned in the parking area, and its position may change depending on the positioning method used.
For example, the tf transformation of the robot is first configured to convert the poses of the coordinate systems of the sensors on the robot; because each sensor has its own xyz spatial position, the relative pose of each sensor with respect to the preset grid map can be obtained through the tf transformation. The robot then subscribes to the sensor data and the preset grid map: one scanning revolution of the sensor can yield more than 2000 measurements, the scanned data are broadcast by the sensor, and the robot operating system receives the broadcast data. The preset grid map is a picture in pgm format in which each pixel represents 5 cm; it is a grey-scale map in which white represents the drivable area and black represents the target objects. Next, a control semaphore is subscribed to; the control semaphore is an instruction broadcast by a mobile phone app or a web page and acts like a trigger. The ICP point cloud matching algorithm in the robot operating system then starts the matching operation and publishes an initial pose, that is, the position coordinates (xy coordinates) and heading angle of the robot in the preset grid map. Finally, the point cloud matching degree between the scanned sensor data and the corresponding points in the preset grid map is calculated; after all points are traversed, the number of matched points is divided by the total number of points to obtain the matching percentage. If the percentage is greater than 60%, the positioning is judged to be successful; otherwise the positioning fails. The rotation and translation between the poses before and after matching are then calculated, and the accurately registered pose is published to replace the initial pose, so that an accurate positioning of the robot is obtained. The sensors may include a first ranging sensor and a second ranging sensor, or a third ranging sensor.
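The matching-percentage check described in this paragraph could be sketched as follows. The 5 cm grid resolution and the 60% success threshold come from the description above, while the 10 cm match distance, the near-black pixel cut-off, and the KD-tree nearest-neighbour query are implementation assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

GRID_RESOLUTION = 0.05      # 5 cm per pixel, as stated for the pgm grid map
MATCH_DISTANCE = 0.10       # assumed: a scan point "matches" if within 10 cm of the map
MATCH_THRESHOLD = 0.60      # positioning is judged successful above 60 %

def occupied_points_from_grid(grid, origin=(0.0, 0.0)):
    """Convert black (occupied) pixels of the grey-scale grid map into map-frame points."""
    rows, cols = np.nonzero(grid < 50)          # near-black pixels represent target objects
    xs = origin[0] + cols * GRID_RESOLUTION
    ys = origin[1] + rows * GRID_RESOLUTION
    return np.column_stack([xs, ys])

def matching_percentage(scan_xy, map_points):
    """Fraction of scan points whose nearest map point lies within MATCH_DISTANCE."""
    tree = cKDTree(map_points)
    dists, _ = tree.query(scan_xy)
    return float(np.mean(dists < MATCH_DISTANCE))

def check_localization(scan_xy, grid):
    """Return the matching percentage and whether the positioning is judged successful."""
    pct = matching_percentage(scan_xy, occupied_points_from_grid(grid))
    return pct, pct > MATCH_THRESHOLD
```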
Compared with the prior-art approach in which an approximate position is set manually during initial positioning and the robot is then rotated until the position it scans matches the electronic map of the underground parking lot, the robot in this method automatically and cyclically calculates the matching degree of its current position against the electronic map and matches the electronic map in real time for accurate positioning, thereby improving the positioning accuracy of the robot.
Based on the same inventive concept, the embodiment of the present application further provides a positioning apparatus of a robot corresponding to the positioning method of the robot, and since the principle of the apparatus in the embodiment of the present application for solving the problem is similar to the positioning method described above in the embodiment of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a positioning device of a robot according to an embodiment of the present disclosure, as shown in fig. 5, a positioning device 500 includes:
the data acquisition module 510 is configured to acquire contour point cloud data acquired by the robot in a parking area of the underground parking lot according to a current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
a matching degree calculation module 520, configured to calculate a matching degree between the acquired contour point cloud data and reference point cloud data corresponding to the electronic map of the parking area;
a difference calculation module 530, configured to calculate a difference between each contour point cloud data and the reference point cloud data if the matching degree is smaller than a preset matching threshold;
the cyclic execution module 540 is used for adjusting the current pose of the robot according to the difference value, and returning to the execution step to acquire contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
and a pose determining module 550, configured to determine, if the matching degree is not smaller than a preset matching threshold, a pose of the robot corresponding to the matching degree that reaches the preset matching threshold as a target pose of the robot in the parking area.
Optionally, a panoramic camera is mounted on the robot; the data acquisition module 510 is configured to:
s1110, respectively determining the position coordinates of each target object according to the image information of a plurality of target objects in the parking area shot by the panoramic camera arranged on the robot; wherein the plurality of target objects include an elevator entrance and a plurality of columns in the parking area;
s1120, performing coordinate conversion processing on the position coordinate of each target object to obtain a reference coordinate of each target object in the electronic map of the parking area, and determining a reference object corresponding to the reference coordinate of each target object in the electronic map from the electronic map;
s1130, aiming at each target object, calculating the shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
s1140, if the calculated shape similarity is smaller than a first similarity threshold, identifying contour pixels of each target object from the image information of the target object, adjusting a target pixel group to the contour pixels of the target object to change the shape of the target object, wherein the target pixel group comprises 50 pixels nearest to all the contour pixels of the target object, and returning to execute step S1130;
s1150, if the calculated shape similarity is greater than or equal to a first similarity threshold, the contour pixel points of the target object are not adjusted;
s1160, determining point cloud data corresponding to the contour pixel points of the target objects as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
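The iterative contour-refinement flow of steps S1110 to S1160 could be skeletonized as below; every helper on the `helpers` namespace, the numeric similarity threshold, and the loop structure are assumptions made only to show how the steps chain together.

```python
import numpy as np

FIRST_SIMILARITY_THRESHOLD = 0.8   # assumed value; only the threshold's existence is given
TARGET_GROUP_SIZE = 50             # pixels moved onto the contour per iteration (step S1140)

def acquire_contour_point_cloud(target_objects, helpers):
    """Skeleton of steps S1110-S1160. `helpers` is any namespace bundling placeholder
    image-processing and map primitives:
      locate(obj)        -> position coordinates from the panoramic image      (S1110)
      to_reference(pos)  -> reference object at the converted map coordinate   (S1120)
      contour(obj)       -> contour pixel points of the object
      similarity(c, ref) -> shape similarity in [0, 1]                         (S1130)
      reshape(c, k)      -> contour after moving the k nearest pixels onto it  (S1140)
      to_points(c)       -> point cloud data for the contour pixels            (S1160)
    """
    cloud = []
    for obj in target_objects:
        reference = helpers.to_reference(helpers.locate(obj))
        contour = helpers.contour(obj)
        while helpers.similarity(contour, reference) < FIRST_SIMILARITY_THRESHOLD:
            contour = helpers.reshape(contour, TARGET_GROUP_SIZE)
        cloud.extend(helpers.to_points(contour))     # S1150: contour left unchanged
    return np.asarray(cloud)
```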
Optionally, a first distance measuring sensor is installed at a first position of the robot, a second distance measuring sensor is installed at a second position of the robot, and the first position is different from the second position; the data acquisition module 510 is specifically configured to:
acquiring first scanning data obtained by scanning the parking area for one circle by using a first ranging sensor according to the current pose of the robot;
determining a first point cloud data set corresponding to the contour pixel point of each target object from the first scanning data;
comparing the first point cloud data set corresponding to the determined contour pixel point of each target object with a preset point cloud set of the target object; the preset point cloud set comprises point cloud data corresponding to contour pixel points of target objects in the electronic map;
if the comparison result is consistent, the second distance measuring sensor is not started, and the first point cloud data set corresponding to the contour pixel point of each target object is determined as the point cloud data set to be processed;
if the comparison result is inconsistent, starting a second distance measuring sensor to work, scanning the parking area for one circle by using the second distance measuring sensor to obtain second scanning data, determining a second point cloud data set corresponding to the contour pixel point of each target object from the second scanning data, clustering the point cloud data in the first point cloud data set and the second point cloud data set to obtain a clustered point cloud data set, and determining the clustered point cloud data set as a point cloud data set to be processed;
and determining all point cloud data in the point cloud data set to be processed as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
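A rough sketch of the dual-ranging-sensor flow above follows. The distance tolerance used for the "consistent" comparison and the voxel-style merge used as a stand-in for the clustering step are assumptions introduced for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

CONSISTENCY_TOLERANCE = 0.10   # assumed: first scan matches the preset set within 10 cm
VOXEL_SIZE = 0.05              # assumed cell size used to merge the two point cloud sets

def consistent_with_preset(first_set, preset_set):
    """True if every point of the first sensor's set has a preset-set neighbour within tolerance."""
    dists, _ = cKDTree(preset_set).query(first_set)
    return bool(np.all(dists < CONSISTENCY_TOLERANCE))

def cluster_merge(first_set, second_set):
    """Stand-in for clustering: merge the two sets and keep one point per voxel cell."""
    merged = np.vstack([first_set, second_set])
    keys = np.floor(merged / VOXEL_SIZE).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]

def to_be_processed_set(first_set, preset_set, scan_second_sensor):
    """Decide whether the second ranging sensor must be started, then build the set to process."""
    if consistent_with_preset(first_set, preset_set):
        return first_set                              # second ranging sensor stays off
    second_set = scan_second_sensor()                 # start the second ranging sensor
    return cluster_merge(first_set, second_set)
```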
Optionally, a third distance measuring sensor is arranged on the robot; the data acquisition module 510 is specifically configured to:
acquiring third scanning data obtained by scanning the parking area for one circle by using the third ranging sensor according to the current pose of the robot;
determining a third point cloud data set corresponding to the contour pixel point of each target object from the third scanning data;
inputting the third point cloud data set into a pre-trained shape synthesis model to obtain the shape of a synthesis object;
calculating the shape similarity between the synthetic object and a target object corresponding to a third point cloud data set corresponding to the synthetic object;
if the calculated shape similarity is larger than a second similarity threshold value, determining point cloud data included in the third point cloud data set as contour point cloud data acquired by the robot in a parking area of an underground parking lot according to the current pose;
wherein the shape synthesis model is trained by:
acquiring scanning data samples of a plurality of target objects to be detected in a parking area of an underground parking lot and an actual shape of each target object to be detected;
and inputting the point cloud data set sample corresponding to the contour pixel point of each target object to be tested and the actual shape of the target object to be tested into a pre-constructed neural network model for training until the similarity between the shape of the synthetic object output by the neural network model and the actual shape of the target object to be tested is greater than a second similarity threshold value, and obtaining a trained shape synthetic model.
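A toy sketch of how the shape synthesis model could be trained is shown below. PyTorch is used here only for illustration; the fixed point count, the MLP architecture, the use of cosine similarity as the shape-similarity measure, and the numeric threshold are all assumptions beyond what the application states.

```python
import torch
import torch.nn as nn

NUM_POINTS = 256                    # assumed fixed number of contour points per object
SHAPE_DIM = 64                      # assumed dimension of the synthesized shape descriptor
SECOND_SIMILARITY_THRESHOLD = 0.9   # assumed value for the second similarity threshold

class ShapeSynthesisModel(nn.Module):
    """Toy stand-in for the pre-constructed neural network model: maps a contour
    point cloud set to a synthesized shape descriptor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_POINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, SHAPE_DIM),
        )

    def forward(self, points):                 # points: (batch, NUM_POINTS, 3)
        return self.net(points.flatten(1))

def shape_similarity(pred, target):
    """Cosine similarity, used here as a stand-in for the shape-similarity measure."""
    return torch.nn.functional.cosine_similarity(pred, target, dim=1).mean()

def train(model, loader, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sim = torch.tensor(0.0)
    for _ in range(epochs):
        for sample_points, actual_shape in loader:    # scan samples and actual shapes
            opt.zero_grad()
            sim = shape_similarity(model(sample_points), actual_shape)
            loss = 1.0 - sim                          # maximize similarity
            loss.backward()
            opt.step()
        if sim.item() > SECOND_SIMILARITY_THRESHOLD:  # stop once the threshold is exceeded
            break
    return model
```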
Optionally, the matching degree calculating module 520 is specifically configured to:
calculating the distance between the acquired contour point cloud data corresponding to the contour pixel point and the reference point cloud data corresponding to the electronic map of the parking area aiming at each contour pixel point corresponding to each target object;
counting the target number of the contour pixel points with the calculated distance smaller than the distance threshold;
calculating the ratio of the number of the targets to the total number of all contour pixel points of the target object;
setting a first weight for the calculated ratio, and setting a second weight for the calculated shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
and determining the matching degree between the contour point cloud data and the reference point cloud data based on the ratio and the corresponding first weight thereof, and the shape similarity and the corresponding second weight thereof.
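As a sketch, the matching degree described by the matching degree calculation module might be computed as follows; the distance threshold and the two weight values are assumed, and a plain weighted sum is used as one reading of "based on the ratio and its first weight, and the shape similarity and its second weight".

```python
import numpy as np
from scipy.spatial import cKDTree

DISTANCE_THRESHOLD = 0.10   # assumed: contour point counts as close to the reference (metres)
FIRST_WEIGHT = 0.7          # assumed first weight for the point-distance ratio
SECOND_WEIGHT = 0.3         # assumed second weight for the shape similarity

def matching_degree(contour_points, reference_points, shape_similarity):
    """Weighted matching degree between contour point cloud data and reference point cloud data."""
    tree = cKDTree(reference_points)
    dists, _ = tree.query(contour_points)              # distance to nearest reference point
    target_count = int(np.sum(dists < DISTANCE_THRESHOLD))
    ratio = target_count / len(contour_points)
    return FIRST_WEIGHT * ratio + SECOND_WEIGHT * shape_similarity

# example: perfectly overlapping contours with shape similarity 0.9
pts = np.random.rand(100, 2)
print(matching_degree(pts, pts, 0.9))                  # 0.7*1.0 + 0.3*0.9 = 0.97
```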
Optionally, the loop execution module 540 is specifically configured to:
traversing reference pixel points of all reference objects in the electronic map of the parking area;
acquiring reference point cloud data corresponding to reference pixel points with gray values larger than preset gray values and contour point cloud data corresponding to contour pixel points of each target object;
and adjusting the course direction of the robot along the direction of the parking area where the target object corresponding to the minimum difference value in all the difference values is located according to the difference value between the reference point cloud data and the corresponding contour point cloud data.
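The heading adjustment performed by the cyclic execution module could be sketched like this; representing each target object by a single (difference value, centre) pair and steering with `arctan2` are illustrative assumptions.

```python
import numpy as np

def adjust_heading(robot_xy, target_objects):
    """Return the heading angle (radians) pointing the robot toward the target object
    whose reference-vs-contour point cloud difference value is smallest.

    target_objects: list of (difference_value, object_centre_xy) tuples."""
    _, centre = min(target_objects, key=lambda t: t[0])
    direction = np.asarray(centre) - np.asarray(robot_xy)
    return float(np.arctan2(direction[1], direction[0]))

# example: two candidate target objects; the robot turns toward the closer-matching one
heading = adjust_heading((0.0, 0.0), [(0.8, (5.0, 0.0)), (0.2, (0.0, 3.0))])
print(np.degrees(heading))   # 90.0 degrees, toward the object with difference 0.2
```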
Optionally, the matching degree calculating module 520 is specifically configured to determine the electronic map of the parking area by:
acquiring a preset panoramic map of an underground parking lot, wherein the panoramic map comprises position coordinates of all reference objects in a map coordinate system;
moving a predetermined number of times around the parking area at an arbitrary position of the parking area using a robot;
identifying a stationary object in the parking area using a sensor on the robot, resulting in a relative position between the robot and the stationary object; the static object is an object which is in a static state in the process that the robot moves for a preset number of times;
determining a position identifier of a parking area where the robot is located according to the position of the robot in the parking area of the underground parking lot; the position mark comprises an area shape formed by an elevator entrance characteristic and any three upright post characteristics arranged around the elevator entrance;
dividing the panoramic map into a plurality of sub-area maps according to the characteristics of the elevator entrance in advance; each of the sub-area maps contains an elevator landing feature;
according to the position identification, determining a target sub-area map comprising the area shape from the plurality of sub-area maps;
according to the relative position between the robot and the static object and the target sub-area map, creating an electronic map which is centered by the robot and comprises the static object in the target sub-area map; wherein the stationary object comprises the target object.
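One way to realize the sub-area map selection and robot-centred map creation described above is sketched below. Matching the area shape through sorted pairwise distances and cropping a fixed window are assumptions made for illustration, and `half_size` is a hypothetical parameter.

```python
import numpy as np

def select_target_submap(position_identifier, submaps):
    """Select the sub-area map whose elevator-entrance feature plus three column features
    best reproduce the area shape of the position identifier.

    position_identifier: 4x2 array (elevator entrance and three surrounding columns).
    submaps: list of (submap_grid, 4x2 feature array) pairs, one elevator feature each."""
    def side_lengths(pts):
        pts = np.asarray(pts)
        # pairwise distances are invariant to where the robot observed the shape from
        return np.sort([np.linalg.norm(pts[i] - pts[j])
                        for i in range(4) for j in range(i + 1, 4)])

    target = side_lengths(position_identifier)
    best = min(submaps, key=lambda s: np.linalg.norm(side_lengths(s[1]) - target))
    return best[0]

def crop_around_robot(submap, robot_rc, half_size=100):
    """Return a robot-centred window of the target sub-area map as the electronic map."""
    r, c = robot_rc
    return submap[max(0, r - half_size): r + half_size,
                  max(0, c - half_size): c + half_size]
```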
Optionally, the positioning device further comprises an inspection module (not shown in the figure), the inspection module being configured to:
in the process of controlling the robot to inspect the parking area according to the target pose, acquiring environment image data and laser point cloud data of the robot for the parking area as well as walking posture data of the robot; wherein the environment image data is acquired by a panoramic camera mounted on the robot, the laser point cloud data is acquired by a laser point cloud acquisition module on the robot, and the walking posture data is acquired by a sensor mounted on the robot; wherein the sensor comprises a first ranging sensor and a second ranging sensor, or the sensor comprises a third ranging sensor;
fusing the environment image data, the laser point cloud data and the walking posture data to obtain an inspection state characteristic value of the robot;
inputting the inspection state characteristic value into an abnormal information detection model preset on the robot to obtain abnormal condition information of the parking area; the abnormal condition information comprises road surface abnormal information, environment abnormal information and walking posture abnormal information;
and sending the obtained abnormal condition information to a server so that the server sends an alarm instruction generated based on the abnormal condition information to a target user side, wherein the target user side is a user connected with the robot.
Optionally, the inspection module is specifically configured to:
according to a preset first fusion method for stationary objects and a preset second fusion method for non-stationary objects, assigning an adaptive target fusion method to the currently scanned object to obtain a fused inspection state characteristic value; the target fusion method is the first fusion method or the second fusion method; the first fusion method is a decision-level fusion method jointly decided by the environment image data and the laser point cloud data, and the second fusion method is a feature-level fusion method based mainly on the walking posture data;
determining that the currently scanned object is a stationary object by:
periodically extracting position information corresponding to each scanned object from the environment image data, and judging whether the position information corresponding to each scanned object is changed or not;
and if the position information corresponding to the scanned object does not change within a preset time period, determining that the currently scanned object is a static object.
Compared with the prior-art approach in which an approximate position is set manually during initial positioning and the robot is then rotated until the position it scans matches the electronic map of the underground parking lot, the robot using this device automatically and cyclically calculates the matching degree of its current position against the electronic map and matches the electronic map in real time for accurate positioning, thereby improving the positioning accuracy of the robot.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a processor 610, a memory 620, and a bus 630.
The memory 620 stores machine-readable instructions executable by the processor 610. When the electronic device 600 runs, the processor 610 communicates with the memory 620 through the bus 630, and when the machine-readable instructions are executed by the processor 610, the steps of the robot positioning method in the method embodiment shown in fig. 1 and the step of acquiring contour point cloud data by using a panoramic camera in the method embodiment shown in fig. 2 may be performed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for positioning a robot in the embodiment of the method shown in fig. 1 and the step of acquiring contour point cloud data by using a panoramic camera in the embodiment of the method shown in fig. 2 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method for positioning a robot, the method comprising:
s110, acquiring contour point cloud data acquired by the robot in a parking area of the underground parking lot according to the current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
s120, calculating the matching degree between the acquired outline point cloud data and reference point cloud data corresponding to the electronic map of the parking area;
s130, if the matching degree is smaller than a preset matching threshold, calculating a difference value between each contour point cloud data and the reference point cloud data;
s140, adjusting the current pose of the robot according to the difference value, and returning to execute the step S110;
s150, if the matching degree is not smaller than a preset matching threshold, determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area.
2. The positioning method according to claim 1, wherein a panoramic camera is mounted on the robot; step S110 includes:
s1110, respectively determining the position coordinates of each target object according to the image information of a plurality of target objects in the parking area shot by the panoramic camera arranged on the robot; wherein the plurality of target objects include an elevator entrance and a plurality of columns in the parking area;
s1120, performing coordinate conversion processing on the position coordinate of each target object to obtain a reference coordinate of each target object in the electronic map of the parking area, and determining a reference object corresponding to the reference coordinate of each target object in the electronic map from the electronic map;
s1130, aiming at each target object, calculating the shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
s1140, if the calculated shape similarity is smaller than a first similarity threshold, identifying contour pixels of each target object from the image information of the target object, adjusting a target pixel group to the contour pixels of the target object to change the shape of the target object, wherein the target pixel group comprises 50 pixels nearest to all the contour pixels of the target object, and returning to execute step S1130;
s1150, if the calculated shape similarity is greater than or equal to a first similarity threshold, the contour pixel points of the target object are not adjusted;
s1160, determining point cloud data corresponding to the contour pixel points of the target objects as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
3. The positioning method according to claim 2, wherein a first distance measuring sensor is installed at a first position of the robot, and a second distance measuring sensor is installed at a second position, the first position being different from the second position; step S110 further includes:
acquiring first scanning data obtained by scanning the parking area for one circle by using a first ranging sensor according to the current pose of the robot;
determining a first point cloud data set corresponding to the contour pixel point of each target object from the first scanning data;
comparing the first point cloud data set corresponding to the determined contour pixel point of each target object with a preset point cloud set of the target object; the preset point cloud set comprises point cloud data corresponding to contour pixel points of target objects in the electronic map;
if the comparison result is consistent, the second distance measuring sensor is not started, and the first point cloud data set corresponding to the contour pixel point of each target object is determined as the point cloud data set to be processed;
if the comparison result is inconsistent, starting a second distance measuring sensor to work, scanning the parking area for one circle by using the second distance measuring sensor to obtain second scanning data, determining a second point cloud data set corresponding to the contour pixel point of each target object from the second scanning data, clustering the point cloud data in the first point cloud data set and the second point cloud data set to obtain a clustered point cloud data set, and determining the clustered point cloud data set as a point cloud data set to be processed;
and determining all point cloud data in the point cloud data set to be processed as contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose.
4. The positioning method according to claim 2, wherein a third distance measuring sensor is provided on the robot; step S110 further includes:
acquiring third scanning data obtained by scanning the parking area for one circle by using the third ranging sensor according to the current pose of the robot;
determining a third point cloud data set corresponding to the contour pixel point of each target object from the third scanning data;
inputting the third point cloud data set into a pre-trained shape synthesis model to obtain the shape of a synthesis object;
calculating the shape similarity between the synthetic object and a target object corresponding to a third point cloud data set corresponding to the synthetic object;
if the calculated shape similarity is larger than a second similarity threshold value, determining point cloud data included in the third point cloud data set as contour point cloud data acquired by the robot in a parking area of an underground parking lot according to the current pose;
wherein the shape synthesis model is trained by:
acquiring scanning data samples of a plurality of target objects to be detected in a parking area of an underground parking lot and an actual shape of each target object to be detected;
and inputting the point cloud data set sample corresponding to the contour pixel point of each target object to be tested and the actual shape of the target object to be tested into a pre-constructed neural network model for training until the similarity between the shape of the synthetic object output by the neural network model and the actual shape of the target object to be tested is greater than a second similarity threshold value, and obtaining a trained shape synthetic model.
5. The positioning method according to claim 3 or 4, wherein step S120 comprises:
calculating the distance between the acquired contour point cloud data corresponding to the contour pixel point and the reference point cloud data corresponding to the electronic map of the parking area aiming at each contour pixel point corresponding to each target object;
counting the target number of the contour pixel points with the calculated distance smaller than the distance threshold;
calculating the ratio of the number of the targets to the total number of all contour pixel points of the target object;
setting a first weight for the calculated ratio, and setting a second weight for the calculated shape similarity between the target object and a reference object corresponding to the reference coordinate of the target object in the electronic map;
and determining the matching degree between the contour point cloud data and the reference point cloud data based on the ratio and the corresponding first weight thereof, and the shape similarity and the corresponding second weight thereof.
6. The positioning method according to claim 3 or 4, wherein the step of adjusting the current pose of the robot according to the difference value in step S140 comprises:
traversing reference pixel points of all reference objects in the electronic map of the parking area;
acquiring reference point cloud data corresponding to reference pixel points with gray values larger than preset gray values and contour point cloud data corresponding to contour pixel points of each target object;
and adjusting the course direction of the robot along the direction of the parking area where the target object corresponding to the minimum difference value in all the difference values is located according to the difference value between the reference point cloud data and the corresponding contour point cloud data.
7. The positioning method according to claim 3 or 4, characterized in that the electronic map of the parking area is determined by:
acquiring a preset panoramic map of an underground parking lot, wherein the panoramic map comprises position coordinates of all reference objects in a map coordinate system;
moving a predetermined number of times around the parking area at an arbitrary position of the parking area using a robot;
identifying a stationary object in the parking area using a sensor on the robot, resulting in a relative position between the robot and the stationary object; the static object is an object which is in a static state in the process that the robot moves for a preset number of times;
determining a position identifier of a parking area where the robot is located according to the position of the robot in the parking area of the underground parking lot; the position mark comprises an area shape formed by an elevator entrance characteristic and any three upright post characteristics arranged around the elevator entrance;
dividing the panoramic map into a plurality of sub-area maps according to the characteristics of the elevator entrance in advance; each of the sub-area maps contains an elevator landing feature;
according to the position identification, determining a target sub-area map comprising the area shape from the plurality of sub-area maps;
according to the relative position between the robot and the static object and the target sub-area map, creating an electronic map which is centered by the robot and comprises the static object in the target sub-area map; wherein the stationary object comprises the target object.
8. The positioning method according to claim 7, further comprising:
in the process of controlling the robot to inspect the parking area according to the target pose, acquiring environment image data and laser point cloud data of the robot for the parking area as well as walking posture data of the robot; wherein the environment image data is acquired by a panoramic camera mounted on the robot, the laser point cloud data is acquired by a laser point cloud acquisition module on the robot, and the walking posture data is acquired by a sensor mounted on the robot; wherein the sensor comprises a first ranging sensor and a second ranging sensor, or the sensor comprises a third ranging sensor;
fusing the environment image data, the laser point cloud data and the walking posture data to obtain an inspection state characteristic value of the robot;
inputting the inspection state characteristic value into an abnormal information detection model preset on the robot to obtain abnormal condition information of the parking area; the abnormal condition information comprises road surface abnormal information, environment abnormal information and walking posture abnormal information;
and sending the obtained abnormal condition information to a server so that the server sends an alarm instruction generated based on the abnormal condition information to a target user side, wherein the target user side is a user connected with the robot.
9. The positioning method according to claim 8, wherein the step of fusing the environment image data, the laser point cloud data and the walking posture data to obtain the inspection state characteristic value of the robot comprises the steps of:
according to a preset first fusion method for stationary objects and a preset second fusion method for non-stationary objects, assigning an adaptive target fusion method to the currently scanned object to obtain a fused inspection state characteristic value; the target fusion method is the first fusion method or the second fusion method; the first fusion method is a decision-level fusion method jointly decided by the environment image data and the laser point cloud data, and the second fusion method is a feature-level fusion method based mainly on the walking posture data;
determining that the currently scanned object is a stationary object by:
periodically extracting position information corresponding to each scanned object from the environment image data, and judging whether the position information corresponding to each scanned object is changed or not;
and if the position information corresponding to the scanned object does not change within a preset time period, determining that the currently scanned object is a static object.
10. A positioning device of a robot, characterized in that the positioning device comprises:
the data acquisition module is used for acquiring contour point cloud data acquired by the robot in a parking area of the underground parking lot according to the current pose; the parking area comprises at least one elevator entrance and a plurality of columns;
the matching degree calculation module is used for calculating the matching degree between the acquired outline point cloud data and the reference point cloud data corresponding to the electronic map of the parking area;
the difference value calculation module is used for calculating the difference value between each contour point cloud data and the reference point cloud data if the matching degree is smaller than a preset matching threshold value;
the cyclic execution module is used for adjusting the current pose of the robot according to the difference value, and returning to the execution step to acquire contour point cloud data acquired by the robot in the parking area of the underground parking lot according to the current pose;
and the pose determining module is used for determining the pose of the robot corresponding to the matching degree reaching the preset matching threshold as the target pose of the robot in the parking area if the matching degree is not less than the preset matching threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210274358.8A CN114353807B (en) | 2022-03-21 | 2022-03-21 | Robot positioning method and positioning device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210274358.8A CN114353807B (en) | 2022-03-21 | 2022-03-21 | Robot positioning method and positioning device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114353807A true CN114353807A (en) | 2022-04-15 |
CN114353807B CN114353807B (en) | 2022-08-12 |
Family
ID=81094618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210274358.8A Active CN114353807B (en) | 2022-03-21 | 2022-03-21 | Robot positioning method and positioning device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114353807B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115685133A (en) * | 2022-12-30 | 2023-02-03 | 安徽蔚来智驾科技有限公司 | Positioning method for autonomous vehicle, control device, storage medium, and vehicle |
CN116185046A (en) * | 2023-04-27 | 2023-05-30 | 北京宸普豪新科技有限公司 | Mobile robot positioning method, mobile robot and medium |
CN117991259A (en) * | 2024-04-07 | 2024-05-07 | 陕西欧卡电子智能科技有限公司 | Unmanned ship repositioning method and device based on laser radar and millimeter wave radar |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108873001A (en) * | 2018-09-17 | 2018-11-23 | 江苏金智科技股份有限公司 | A kind of accurate method for judging robot localization precision |
WO2019037484A1 (en) * | 2017-08-23 | 2019-02-28 | 腾讯科技(深圳)有限公司 | Laser scanning device calibration method, apparatus, device, and storage medium |
CN110297224A (en) * | 2019-08-01 | 2019-10-01 | 深圳前海达闼云端智能科技有限公司 | Laser radar positioning method and device, robot and computing equipment |
CN110689622A (en) * | 2019-07-05 | 2020-01-14 | 电子科技大学 | Synchronous positioning and composition algorithm based on point cloud segmentation matching closed-loop correction |
US20200206927A1 (en) * | 2018-12-29 | 2020-07-02 | Ubtech Robotics Corp Ltd | Relocalization method and robot using the same |
CN111708047A (en) * | 2020-06-16 | 2020-09-25 | 浙江大华技术股份有限公司 | Robot positioning evaluation method, robot and computer storage medium |
CN111895989A (en) * | 2020-06-24 | 2020-11-06 | 浙江大华技术股份有限公司 | Robot positioning method and device and electronic equipment |
CN112014857A (en) * | 2020-08-31 | 2020-12-01 | 上海宇航系统工程研究所 | Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot |
WO2021219023A1 (en) * | 2020-04-30 | 2021-11-04 | 北京猎户星空科技有限公司 | Positioning method and apparatus, electronic device, and storage medium |
CN113664848A (en) * | 2021-08-27 | 2021-11-19 | 沈阳吕尚科技有限公司 | Inspection robot and working method thereof |
WO2022007602A1 (en) * | 2020-07-09 | 2022-01-13 | 北京京东乾石科技有限公司 | Method and apparatus for determining location of vehicle |
WO2022007504A1 (en) * | 2020-07-09 | 2022-01-13 | 北京京东乾石科技有限公司 | Location determination method, device, and system, and computer readable storage medium |
WO2022021132A1 (en) * | 2020-07-29 | 2022-02-03 | 上海高仙自动化科技发展有限公司 | Computer device positioning method and apparatus, computer device, and storage medium |
CN114102577A (en) * | 2020-08-31 | 2022-03-01 | 北京极智嘉科技股份有限公司 | Robot and positioning method applied to robot |
2022-03-21: CN application CN202210274358.8A (patent CN114353807B, status: active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019037484A1 (en) * | 2017-08-23 | 2019-02-28 | 腾讯科技(深圳)有限公司 | Laser scanning device calibration method, apparatus, device, and storage medium |
CN108873001A (en) * | 2018-09-17 | 2018-11-23 | 江苏金智科技股份有限公司 | A kind of accurate method for judging robot localization precision |
US20200206927A1 (en) * | 2018-12-29 | 2020-07-02 | Ubtech Robotics Corp Ltd | Relocalization method and robot using the same |
CN110689622A (en) * | 2019-07-05 | 2020-01-14 | 电子科技大学 | Synchronous positioning and composition algorithm based on point cloud segmentation matching closed-loop correction |
CN110297224A (en) * | 2019-08-01 | 2019-10-01 | 深圳前海达闼云端智能科技有限公司 | Laser radar positioning method and device, robot and computing equipment |
WO2021219023A1 (en) * | 2020-04-30 | 2021-11-04 | 北京猎户星空科技有限公司 | Positioning method and apparatus, electronic device, and storage medium |
CN111708047A (en) * | 2020-06-16 | 2020-09-25 | 浙江大华技术股份有限公司 | Robot positioning evaluation method, robot and computer storage medium |
CN111895989A (en) * | 2020-06-24 | 2020-11-06 | 浙江大华技术股份有限公司 | Robot positioning method and device and electronic equipment |
WO2022007602A1 (en) * | 2020-07-09 | 2022-01-13 | 北京京东乾石科技有限公司 | Method and apparatus for determining location of vehicle |
WO2022007504A1 (en) * | 2020-07-09 | 2022-01-13 | 北京京东乾石科技有限公司 | Location determination method, device, and system, and computer readable storage medium |
WO2022021132A1 (en) * | 2020-07-29 | 2022-02-03 | 上海高仙自动化科技发展有限公司 | Computer device positioning method and apparatus, computer device, and storage medium |
CN112014857A (en) * | 2020-08-31 | 2020-12-01 | 上海宇航系统工程研究所 | Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot |
CN114102577A (en) * | 2020-08-31 | 2022-03-01 | 北京极智嘉科技股份有限公司 | Robot and positioning method applied to robot |
CN113664848A (en) * | 2021-08-27 | 2021-11-19 | 沈阳吕尚科技有限公司 | Inspection robot and working method thereof |
Non-Patent Citations (6)
Title |
---|
XU CHEN et al.: "Model-based point cloud alignment with principle component analysis for robot welding", 《2017 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS (ARIS)》, 8 September 2017 (2017-09-08), pages 83 - 87, XP033323477, DOI: 10.1109/ARIS.2017.8297194 *
杨奇峰 et al.: "Research on mobile robot localization algorithm based on 3D-NDT" (基于3D-NDT的移动机器人定位算法研究), 《控制工程》 (Control Engineering of China), no. 04, 20 April 2020 (2020-04-20) *
蔡军 et al.: "Improved visual SLAM of a mobile robot based on Kinect" (基于Kinect的改进移动机器人视觉SLAM), 《智能系统学报》 (CAAI Transactions on Intelligent Systems), no. 05, 24 April 2018 (2018-04-24) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115685133A (en) * | 2022-12-30 | 2023-02-03 | 安徽蔚来智驾科技有限公司 | Positioning method for autonomous vehicle, control device, storage medium, and vehicle |
CN116185046A (en) * | 2023-04-27 | 2023-05-30 | 北京宸普豪新科技有限公司 | Mobile robot positioning method, mobile robot and medium |
CN117991259A (en) * | 2024-04-07 | 2024-05-07 | 陕西欧卡电子智能科技有限公司 | Unmanned ship repositioning method and device based on laser radar and millimeter wave radar |
Also Published As
Publication number | Publication date |
---|---|
CN114353807B (en) | 2022-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114353807B (en) | Robot positioning method and positioning device | |
CN110136199B (en) | Camera-based vehicle positioning and mapping method and device | |
KR102305328B1 (en) | System and method of Automatically Generating High Definition Map Based on Camera Images | |
CN105512646B (en) | A kind of data processing method, device and terminal | |
US8983124B2 (en) | Moving body positioning device | |
CN110136058B (en) | Drawing construction method based on overlook spliced drawing and vehicle-mounted terminal | |
US11195016B2 (en) | Pile head analysis system, pile head analysis method, and storage medium in which pile head analysis program is stored | |
CN111856963B (en) | Parking simulation method and device based on vehicle-mounted looking-around system | |
CN110344621A (en) | A kind of wheel points cloud detection method of optic towards intelligent garage | |
US20160364619A1 (en) | Vehicle-Surroundings Recognition Device | |
CN111814752B (en) | Indoor positioning realization method, server, intelligent mobile device and storage medium | |
US11087224B2 (en) | Out-of-vehicle communication device, out-of-vehicle communication method, information processing device, and computer readable medium | |
CN112949366B (en) | Obstacle identification method and device | |
CN109099915A (en) | Method for positioning mobile robot, device, computer equipment and storage medium | |
CN107527368B (en) | Three-dimensional space attitude positioning method and device based on two-dimensional code | |
CN114639085A (en) | Traffic signal lamp identification method and device, computer equipment and storage medium | |
JP2022042146A (en) | Data processor, data processing method, and data processing program | |
US12067471B2 (en) | Searching an autonomous vehicle sensor data repository based on context embedding | |
CN112292580A (en) | Positioning system and method for operating the same | |
CN111862216B (en) | Computer equipment positioning method, device, computer equipment and storage medium | |
CN113158779B (en) | Walking method, walking device and computer storage medium | |
Wang et al. | 3D-LIDAR based branch estimation and intersection location for autonomous vehicles | |
CN112699748B (en) | Human-vehicle distance estimation method based on YOLO and RGB image | |
CN117570959A (en) | Man-machine collaborative rescue situation map construction method | |
CN117124332A (en) | Mechanical arm control method and system based on AI vision grabbing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |