
CN118463996B - Decentralizing multi-robot co-location method and system - Google Patents


Info

Publication number
CN118463996B
CN118463996B (Application number CN202410912557.6A)
Authority
CN
China
Prior art keywords
robot
point cloud
data
relative position
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410912557.6A
Other languages
Chinese (zh)
Other versions
CN118463996A (en)
Inventor
虞永方
戴德云
吴疆
朱开元
卫榆松
叶宇辰
仲健
余楚恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhiyuan Research Institute Co ltd
Original Assignee
Hangzhou Zhiyuan Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhiyuan Research Institute Co ltd filed Critical Hangzhou Zhiyuan Research Institute Co ltd
Priority to CN202410912557.6A priority Critical patent/CN118463996B/en
Publication of CN118463996A publication Critical patent/CN118463996A/en
Application granted granted Critical
Publication of CN118463996B publication Critical patent/CN118463996B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a decentralized multi-robot co-location method and system. The method comprises the following steps: step 1, each robot collects data through a plurality of sensors and preprocesses them, and a global map is built based on the collected data; step 2, each robot matches and positions its current lidar point cloud against the global map with the VGICP positioning algorithm to obtain global positioning information; step 3, each robot obtains the relative position information of surrounding robots; step 4, the relative position information of each robot is updated and corrected through a filtering algorithm based on the global positioning information and the relative position information. The invention fuses the position information with a decentralized Kalman filtering method, thereby realizing high-precision real-time relative pose estimation.

Description

Decentralizing multi-robot co-location method and system
Technical Field
The invention relates to multi-robot co-location, and in particular to a decentralized multi-robot co-location method and system.
Background
In task scenarios such as multi-robot search and rescue and multi-robot cooperative carrying, effective cooperation requires each robot to accurately know its own position and, at the same time, the position information of the other robots; the positioning problem in multi-robot systems is therefore of critical significance. However, conventional positioning methods have several limitations:
1) Dependence on a central node: conventional positioning methods typically rely on a central node to integrate all information, which increases the communication burden and reduces system scalability; in large multi-robot systems, the communication bottleneck of such a central node may become a serious problem;
2) Insufficient positioning accuracy: conventional methods such as GPS or Wi-Fi positioning often cannot provide adequate positioning accuracy; in many application scenarios, particularly those requiring high-accuracy positioning such as search-and-rescue or collaborative carrying tasks, this limited accuracy may become a bottleneck for system performance;
3) Inapplicability indoors: many conventional positioning methods, such as GPS, cannot be used in indoor environments; in many practical applications, such as inside buildings, underground or underwater where there is no GPS signal, these conventional methods fail.
In view of the defects of the traditional methods, the invention provides a decentralized multi-robot co-location method and system.
Disclosure of Invention
The invention aims to provide a decentralized multi-robot cooperative positioning method and system that combine VGICP with a local UWB ranging positioning method and fuse the position information with a decentralized Kalman filtering method, thereby realizing high-precision real-time relative pose estimation.
The technical solution for realizing the purpose of the invention is as follows:
A decentralized multi-robot co-location method comprises the following steps:
Step 1, each robot collects data through a plurality of sensors and performs preprocessing, and a global map is built based on the collected data;
Step 2, each single robot adopts VGICP positioning algorithm to match and position the current laser radar point cloud of the robot and the global map, so as to obtain global positioning information;
step 3, each single robot obtains the relative position information of surrounding robots;
step 4, updating and correcting the relative position information of each single robot through a filtering algorithm based on the global positioning information and the relative position information.
Further, the plurality of sensors includes a lidar, a vision camera, and an inertial measurement unit.
Further, the preprocessing includes:
time synchronization and calibration are carried out on the acquired data, yielding a point cloud frame sequence, an image frame sequence and an inertial measurement data sequence of common length;
filtering, denoising and interpolating the point cloud frame sequence, image frame sequence and inertial measurement data sequence, and performing format conversion and standardization on the data according to the sensor type and data format.
Further, building the global map specifically includes: data fusion is carried out with a filtering algorithm, the geometry and features of the environment are estimated from the fused data through a SLAM algorithm, and a global map is constructed; finally, the constructed global map is smoothed and redundant information is removed.
Further, the step 2 specifically includes:
Step 2-1, preprocessing the current laser radar point cloud by adopting filtering and downsampling;
step 2-2, voxelizing the preprocessed point cloud and the global map point cloud, namely dividing the three-dimensional space into small three-dimensional grids and representing the point cloud data in each voxel by a representative point (the voxel center) and a Gaussian distribution model; the global map point cloud is voxelized in the same way;
step 2-3, preliminarily aligning the current point cloud and the map point cloud with an ICP registration algorithm, and performing a nearest-neighbor search to find, for each point of the current point cloud, its corresponding point in the map point cloud according to the Gaussian-distribution error between voxels, thereby obtaining the global positioning information.
Further, in step 2-3, the points of the current point cloud are matched to corresponding points in the map point cloud according to the Gaussian-distribution error between voxels as follows:
a transformation model between the current point cloud and the map point cloud is designed, with a rigid transformation matrix to be estimated; the distribution mean of a neighboring voxel found by the nearest-neighbor search must lie within a distance threshold of the transformed point; by the reproductive property of the Gaussian distribution, the transformation error between corresponding voxels is itself Gaussian, with zero mean and a variance that combines the variance of the map voxel with the transformed variance of the current voxel.
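The voxelization and voxel-to-voxel Gaussian error described above can be illustrated roughly as follows; the voxel size, the covariance floor used for sparse voxels, and all names are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Group 3-D points into voxels; keep each voxel's mean point and covariance."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(int)
    buckets = {}
    for k, p in zip(map(tuple, keys), points):
        buckets.setdefault(k, []).append(p)
    voxels = {}
    for k, pts in buckets.items():
        pts = np.asarray(pts)
        mu = pts.mean(axis=0)
        # With too few points the sample covariance is degenerate; use a small floor.
        cov = np.cov(pts.T) if len(pts) > 3 else np.eye(3) * 1e-3
        voxels[k] = (mu, cov)
    return voxels

def d2d_error(mu_a, cov_a, mu_b, cov_b, R, t):
    """Distribution-to-distribution residual and its combined covariance:
    by the reproductive property of the Gaussian, d ~ N(0, C_b + R C_a R^T)."""
    d = mu_b - (R @ mu_a + t)
    cov = cov_b + R @ cov_a @ R.T
    return d, cov
```

Each matched voxel pair thus contributes one Gaussian residual whose covariance blends both voxels' distributions, which is what distinguishes this distribution-to-distribution error from plain point-to-point ICP.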
Further, each single robot obtains the relative position information of surrounding robots through UWB, including: calculating the round-trip time of flight between each robot and the base station with the UWB sensor, then calculating the distance between each robot and the base station, and finally computing the relative positions between robots with a trilateration algorithm.
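A minimal sketch of the TOF-to-distance conversion and trilateration step (linearized least-squares trilateration; the anchor layout and all function names are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_time_s):
    """Two-way TOF: the signal covers the range twice, so halve the path."""
    return C * round_trip_time_s / 2.0

def trilaterate(anchors, distances):
    """Estimate a position from >= 3 anchor distances by subtracting the
    first sphere equation from the others, which yields a linear system."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

With more anchors than unknowns the least-squares solve also averages out ranging noise, which is why over-determined anchor sets are preferred in practice.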
Further, step 4 specifically includes:
step 4-1, initializing the state and the covariance matrix according to the initial global positioning information and the initial relative positioning information;
step 4-2, constructing a state prediction model and an observation prediction model based on the relative position information between the robots, the state prediction being given by the motion function f and the observation prediction by the observation function h;
computing the error covariance matrix at the next moment from the Jacobian matrix F of the motion model and the current error covariance matrix P as P_pred = F P F^T + Q, where Q is the process noise covariance and F is the linearized state transition matrix (the Jacobian of f);
step 4-3, using the global positioning information, computing the Kalman gain K and updating the predicted state and covariance to more accurate estimates:
the state update is x = x_pred + K (z - h(x_pred)), where z is the observation;
the covariance matrix update is P = (I - K H) P_pred, where H is the linearized observation matrix (the Jacobian of h);
step 4-4, repeating step 4-2 and step 4-3, continuously predicting and updating the state and covariance of the system according to the global positioning information and the relative position information.
Further, the Kalman gain is K = P_pred H^T (H P_pred H^T + R)^-1, where R is the observation noise covariance matrix.
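The predict/update cycle of steps 4-1 to 4-4 can be sketched as a generic extended Kalman filter; this is an illustrative sketch, not the patent's exact motion and observation models, and the names f, F, h, H, Q, R are assumptions:

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction: x_pred = f(x), P_pred = F P F^T + Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Update with measurement z via the Kalman gain
    K = P H^T (H P H^T + R)^-1."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - h(x))          # state update
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new
```

In the scheme described here, f/F would come from the inter-robot relative-motion model, z from the VGICP global pose, and Q and R are the process and observation noise covariances named in the text.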
The decentralized multi-robot cooperative positioning system comprises a data acquisition unit, a global map construction unit, a global positioning information acquisition unit, a relative position information calculation unit and a relative position information correction unit. Each robot acquires data through the data acquisition unit and the global map construction unit builds the global map; the global positioning information acquisition unit matches and positions the robot's current lidar point cloud against the global map with the VGICP positioning algorithm to obtain global positioning information; each robot acquires the relative position information of surrounding robots through the relative position information calculation unit; and the relative position information correction unit updates and corrects the relative position information of each robot through a filtering algorithm based on the global positioning information and the relative position information.
Compared with the prior art, the invention has the following beneficial effects:
1) High precision: compared with traditional ICP and NDT algorithms, VGICP adopts a more refined error model and optimization method, considering not only the distances between point cloud data but also their normal vectors, so as to obtain a more accurate and stable registration result; combined with the centimeter-level ranging accuracy provided by UWB, the decentralized Kalman filtering method can effectively fuse and optimize this information to obtain a more accurate and stable positioning result;
2) Indoor and outdoor applicability: the proposed decentralized multi-robot positioning system does not depend on GPS signals; it positions through inter-robot UWB ranging and the global positioning algorithm VGICP, so robots can perform accurate collaborative operation in places without GPS signal, such as inside buildings, underground or underwater; it can also register against a pre-built global map and further integrate GPS, providing a reliable positioning solution both indoors and outdoors;
3) Decentralization: compared with existing centralized methods, this method has higher reliability, stronger scalability and better data security: if any node fails, the other nodes can continue to work, and there is no need to aggregate all observation information in the cluster. Compared with existing decentralized methods, this work decentralizes collaborative state estimation, focusing on a consensus optimization algorithm under lightweight communication (steps 2 and 3) to cope with the pressure that a robot cluster's large communication data volume puts on bandwidth; the idea is to reduce the transmission of raw robot observations, and to explore an update strategy under asynchronous communication to cope with the challenge that time asynchrony between robots poses to information fusion, finally realizing multi-robot collaborative state estimation that fuses single-robot odometry estimation with multi-robot relative estimation;
4) High-frequency real-time performance: thanks to the lightweight filtering algorithm (step 4), the proposed decentralized multi-robot positioning system can provide high-frequency real-time positioning information, which is very important for multi-robot collaborative exploration tasks.
Drawings
FIG. 1 is a general flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and examples.
With reference to FIG. 1, this embodiment provides a decentralized multi-robot co-location method, including:
Step 1, acquire data through a plurality of sensors and construct a global map from the acquired data.
Specifically, a laser radar, a vision camera and an inertial measurement unit (IMU) are selected, sensor parameters are configured, and sensor calibration is carried out to obtain the transformation of each sensor relative to the robot body center, comprising a rotation matrix and a translation vector. To facilitate subsequent data fusion, environmental data are collected with the sensors, and time synchronization and calibration are performed on the collected data, yielding a point cloud frame sequence, an image frame sequence and an IMU data sequence of common length. Since the acquired data contain environmental and sensor noise, the raw data are preprocessed, including filtering, denoising and interpolation operations, and format conversion and standardization are performed according to the sensor type and data format, so as to improve data quality and reduce computational complexity; the processed data are a point cloud sequence, an image sequence and an IMU sequence. On this basis, a filtering algorithm is adopted for data fusion, the geometry and features of the environment are estimated from the fused data by a SLAM algorithm, and a global map is constructed. Finally, the constructed global map is optimized, mainly by smoothing and removal of redundant information.
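One plausible realization of the time synchronization mentioned above is nearest-timestamp association between sensor streams; this sketch, including the 50 ms gating threshold and the function name, is an assumption for illustration, not the patent's method:

```python
import numpy as np

def nearest_sync(ref_ts, other_ts, max_dt=0.05):
    """For each reference timestamp, return the index of the closest
    timestamp in the other (sorted) stream, or -1 if the best match
    is farther than max_dt seconds away."""
    other_ts = np.asarray(other_ts, dtype=float)
    idx = np.searchsorted(other_ts, ref_ts)
    out = []
    for t, i in zip(ref_ts, idx):
        # The closest timestamp is either just before or at the insertion point.
        cands = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
        j = min(cands, key=lambda j: abs(other_ts[j] - t))
        out.append(j if abs(other_ts[j] - t) <= max_dt else -1)
    return out
```

Pairs gated out by `max_dt` would be dropped (or interpolated, as the interpolation step in the text suggests) before fusion.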
Step 2, estimate the pose of each robot to obtain global positioning information.
Specifically, based on the VGICP (Voxelized Generalized Iterative Closest Point) positioning algorithm, the robot's current lidar point cloud is matched against the pre-built global map to estimate each robot's global positioning information. First, the current lidar point cloud is preprocessed by filtering and downsampling to reduce computation and improve robustness. Then, the preprocessed point cloud and the global map point cloud are voxelized: the three-dimensional space is divided into small three-dimensional grids, and the point cloud data in each voxel are represented by a representative point (the voxel center) and a Gaussian distribution model; the global map point cloud is voxelized in the same way. On this basis, the current point cloud and the map point cloud are preliminarily aligned with an ICP-type registration algorithm, and a nearest-neighbor search finds, for each voxel of the current point cloud, the corresponding voxel in the map point cloud according to the Gaussian-distribution error between voxels; the distribution mean of a matched neighboring voxel must lie within a distance threshold. By the reproductive property of the Gaussian distribution, the transformation error between corresponding voxels is itself Gaussian, with zero mean and a covariance that combines the map voxel's covariance with the transformed covariance of the current voxel. The sought transformation T is the one that maximizes the log likelihood of these errors.
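The log-likelihood maximization over the transform can be evaluated as a sum of Mahalanobis terms over matched voxel pairs; this is a scoring sketch under assumed variable names (the actual optimizer, e.g. Gauss-Newton over SE(3), is omitted):

```python
import numpy as np

def vgicp_cost(pairs, R, t):
    """Negative log-likelihood (up to constants) of a candidate rigid
    transform (R, t) over voxel correspondences.
    pairs: iterable of (mu_a, cov_a, mu_b, cov_b) matched voxels."""
    cost = 0.0
    for mu_a, cov_a, mu_b, cov_b in pairs:
        d = mu_b - (R @ mu_a + t)          # residual after transform
        S = cov_b + R @ cov_a @ R.T        # combined Gaussian covariance
        cost += d @ np.linalg.solve(S, d)  # squared Mahalanobis distance
    return cost
```

Minimizing this cost over (R, t) is equivalent to maximizing the log likelihood of the zero-mean Gaussian errors described above; a lower cost means a better-aligned transform.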
Step 3, each robot acquires the relative position information of surrounding robots through a ranging technique.
Specifically, each robot obtains the relative position information of surrounding robots based on the UWB (Ultra-Wideband) ranging technique. The round-trip time of flight (TOF) between each robot and the base station is measured with the UWB sensor installed on each robot, the distance between each robot and the base station is computed from it, and the relative positions between robots are then computed with a trilateration algorithm.
Step 4, update and correct the relative position information of each robot through a filtering algorithm based on the global positioning information and the relative position information.
Specifically, the global positioning information derived from VGICP and the relative positioning information obtained by UWB ranging are fed back to each robot at a lower frequency, and the relative position information among the robots is updated through an EKF fusion algorithm, as follows:
a) Initialize the system state and covariance matrix from the initial global positioning information and relative positioning information;
b) Using the relative position information between robots, predict the system state and its covariance through the motion model f and observation model h: the predicted state is x_pred = f(x) and the predicted observation is h(x_pred); from the Jacobian matrix F of the motion model and the error covariance matrix P, the error covariance at the next moment is P_pred = F P F^T + Q, where Q is the process noise covariance, F is the linearized state transition matrix and H is the linearized observation matrix (the Jacobian of h);
c) Using the global positioning information, compute the Kalman gain K = P_pred H^T (H P_pred H^T + R)^-1, where R is the observation noise covariance matrix, and update the predicted state and covariance to more accurate estimates: the state update is x = x_pred + K (z - h(x_pred)), and the covariance matrix update is P = (I - K H) P_pred;
d) Repeat steps b and c, continuously predicting and updating the state and covariance of the system according to the global positioning information and the relative position information.
The embodiment also provides a decentralized multi-robot co-location system, comprising a data acquisition unit, a global map construction unit, a global positioning information acquisition unit, a relative position information calculation unit and a relative position information correction unit. Each robot acquires data through the data acquisition unit and the global map construction unit builds the global map; the global positioning information acquisition unit matches and positions the robot's current lidar point cloud against the global map with the VGICP positioning algorithm to obtain global positioning information; each robot acquires the relative position information of surrounding robots through the relative position information calculation unit; and the relative position information correction unit updates and corrects the relative position information of each robot through a filtering algorithm based on the global positioning information and the relative position information.
Examples
Test prototypes: two quadruped robot dogs (without GPS);
Test environment: outdoor (temperature 30±15 °C, normal pressure); the test site is open, the environment is rich in texture, the test scene size is consistent with the black-box test site, bystanders are kept at a distance, and lighting is good, meeting the test conditions. Co-location is carried out with the method of the specific embodiment:
Step 1, acquire data and construct a global map of the current test environment.
Step 2, each robot dog uploads its odometry information and lidar point cloud information to a server through the TCP/IP network protocol.
Step 3, the server localizes the two robot dogs against the pre-built global map using the received odometry and point cloud information.
Step 4, the robots start to exchange information with each other through the TCP/IP network protocol. Such information includes UWB-measured distance information and odometry information. With these data, the system obtains the relative position information between the two robot dogs through data fusion of the odometry and UWB.
The ground truth of the whole motion process is acquired with a motion-capture device; the estimated relative position information and the relative position information acquired by the motion-capture system are recorded, and the estimated relative distance and the motion-capture relative distance are computed. The system then performs timestamp alignment between the estimated data and the motion-capture data to evaluate the accuracy of the algorithm; the evaluation shows that high-precision real-time relative pose estimation is achieved. The invention is applicable to scenarios such as multi-robot search-and-rescue tasks and multi-robot collaborative carrying tasks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims and the equivalents thereof, the present invention is also intended to include such modifications and variations.

Claims (5)

1. A method for decentralized multi-robot co-location, comprising the following steps:
Step 1, each robot collects data through a plurality of sensors and performs preprocessing, and a global map is built based on the collected data;
Step 2, each single robot adopts VGICP positioning algorithm to match and position the current laser radar point cloud of the robot and the global map, so as to obtain global positioning information;
step 3, each single robot obtains the relative position information of surrounding robots;
step 4, updating and correcting the relative position information of each single robot through a filtering algorithm based on the global positioning information and the relative position information;
The step 2 specifically includes:
Step 2-1, preprocessing the current laser radar point cloud by adopting filtering and downsampling;
step 2-2, voxelizing the preprocessed point cloud and the global map point cloud, namely dividing the three-dimensional space into small three-dimensional grids and representing the point cloud data in each voxel by a representative point (the voxel center) and a Gaussian distribution model; the global map point cloud is voxelized in the same way;
step 2-3, preliminarily aligning the current point cloud and the map point cloud with an ICP registration algorithm, and performing a nearest-neighbor search to find, for each point of the current point cloud, its corresponding point in the map point cloud according to the Gaussian-distribution error between voxels, thereby obtaining the global positioning information;
in step 2-3, the corresponding points are found from the map point cloud according to the Gaussian-distribution error between voxels as follows: a transformation model between the current point cloud and the map point cloud is designed, with a rigid transformation matrix to be estimated; the distribution mean of a neighboring voxel found by the nearest-neighbor search must lie within a distance threshold of the transformed point; by the reproductive property of the Gaussian distribution, the transformation error between corresponding voxels is itself Gaussian, with zero mean and a variance that combines the variance of the map voxel with the transformed variance of the current voxel;
each single robot obtains the relative position information of surrounding robots through UWB, comprising: calculating the round-trip time of flight between each robot and the base station with the UWB sensor, then calculating the distance between each robot and the base station, and finally computing the relative positions between robots with a trilateration algorithm;
the step 4 specifically comprises:
step 4-1, initializing the state and the covariance matrix according to the initial global positioning information and the initial relative positioning information;
step 4-2, constructing a state prediction model and an observation prediction model based on the relative position information between the robots, the state prediction being given by the motion function f and the observation prediction by the observation function h;
computing the error covariance matrix at the next moment from the Jacobian matrix F of the motion model and the current error covariance matrix P as P_pred = F P F^T + Q, where Q is the process noise covariance and F is the linearized state transition matrix;
step 4-3, using the global positioning information, computing the Kalman gain K and updating the predicted state and covariance to more accurate estimates:
the state update is x = x_pred + K (z - h(x_pred)), where z is the observation;
the covariance matrix update is P = (I - K H) P_pred, where H is the linearized observation matrix;
step 4-4, repeating step 4-2 and step 4-3, continuously predicting and updating the state and covariance of the system according to the global positioning information and the relative position information;
the Kalman gain is K = P_pred H^T (H P_pred H^T + R)^-1, where R is the observation noise covariance matrix.
2. The decentralized multi-robot co-location method of claim 1, wherein the plurality of sensors comprises a lidar, a vision camera, and an inertial measurement unit.
3. The decentralized multi-robot co-location method of claim 1, wherein the preprocessing comprises:
time synchronization and calibration of the acquired data, yielding a point cloud frame sequence, an image frame sequence and an inertial measurement data sequence of common length;
filtering, denoising and interpolating the point cloud frame sequence, image frame sequence and inertial measurement data sequence, and performing format conversion and standardization on the data according to the sensor type and data format.
4. The decentralized multi-robot co-location method according to claim 1, wherein building the global map comprises: carrying out data fusion with a filtering algorithm, estimating the geometry and features of the environment from the fused data through a SLAM algorithm, and constructing a global map; finally, smoothing the constructed global map and removing redundant information.
5. A multi-robot co-location system for implementing the decentralized multi-robot co-location method of any one of claims 1-4, characterized by comprising a data acquisition unit, a global map construction unit, a global positioning information acquisition unit, a relative position information calculation unit and a relative position information correction unit, wherein each robot acquires data through the data acquisition unit and the global map construction unit builds the global map; the global positioning information acquisition unit matches and positions the robot's current lidar point cloud against the global map with the VGICP positioning algorithm to obtain global positioning information; each robot acquires the relative position information of surrounding robots through the relative position information calculation unit; and the relative position information correction unit updates and corrects the relative position information of each robot through a filtering algorithm based on the global positioning information and the relative position information.
CN202410912557.6A 2024-07-09 2024-07-09 Decentralizing multi-robot co-location method and system Active CN118463996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410912557.6A CN118463996B (en) 2024-07-09 2024-07-09 Decentralizing multi-robot co-location method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410912557.6A CN118463996B (en) 2024-07-09 2024-07-09 Decentralizing multi-robot co-location method and system

Publications (2)

Publication Number Publication Date
CN118463996A CN118463996A (en) 2024-08-09
CN118463996B true CN118463996B (en) 2024-10-01

Family

ID=92153743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410912557.6A Active CN118463996B (en) 2024-07-09 2024-07-09 Decentralizing multi-robot co-location method and system

Country Status (1)

Country Link
CN (1) CN118463996B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115143958A (en) * 2022-04-28 2022-10-04 西南科技大学 Multi-sensor fusion SLAM method based on GPU acceleration
CN117606465A (en) * 2023-11-24 2024-02-27 中兵智能创新研究院有限公司 Method for simultaneous positioning and mapping of multiple robots in large-scale environment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109059942B (en) * 2018-08-22 2021-12-14 中国矿业大学 Underground high-precision navigation map construction system and method
CN113538410B (en) * 2021-08-06 2022-05-20 广东工业大学 Indoor SLAM mapping method based on 3D laser radar and UWB
CN116753945A (en) * 2023-05-29 2023-09-15 重庆大学 Navigation method of industrial inspection robot based on multi-sensor fusion
CN118093750A (en) * 2024-01-23 2024-05-28 安徽海博智能科技有限责任公司 Cloud collaboration-based shovel installation point local automatic updating global map manufacturing method


Also Published As

Publication number Publication date
CN118463996A (en) 2024-08-09

Similar Documents

Publication Publication Date Title
CN112014857B (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN110930495A (en) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
CN112965063B (en) Robot mapping and positioning method
CN114018248B (en) Mileage metering method and image building method integrating code wheel and laser radar
CN111260751B (en) Mapping method based on multi-sensor mobile robot
CN110187337B (en) LS and NEU-ECEF space-time registration-based high maneuvering target tracking method and system
JP2014523572A (en) Generating map data
CN114383611A (en) Multi-machine cooperative laser SLAM method, device and system for mobile robot
CN114429432B (en) Multi-source information layered fusion method and device and storage medium
CN116359905A (en) Pose map SLAM (selective level mapping) calculation method and system based on 4D millimeter wave radar
CN114187418A (en) Loop detection method, point cloud map construction method, electronic device and storage medium
CN117685953A (en) UWB and vision fusion positioning method and system for multi-unmanned aerial vehicle co-positioning
CN116429116A (en) Robot positioning method and equipment
WO2024120187A1 (en) Method for estimating dynamic target of unmanned aerial vehicle in information rejection environment
CN113379915B (en) Driving scene construction method based on point cloud fusion
CN117451032A (en) SLAM method and system of low-calculation-force and loose-coupling laser radar and IMU
CN113093759A (en) Robot formation construction method and system based on multi-sensor information fusion
CN115655291B (en) Method, device, mobile robot, equipment and medium for laser SLAM closed loop mapping
CN118463996B (en) Decentralizing multi-robot co-location method and system
CN116429112A (en) Multi-robot co-location method and device, equipment and storage medium
CN118363008B (en) Robot positioning scene degradation processing method, rapid positioning method and system
CN117289298B (en) Multi-machine collaborative online mapping method, system and terminal equipment based on laser radar
CN118279515B (en) Real-time multi-terminal map fusion method and device
CN118427286B (en) Geographic information processing method based on cross-medium aircraft and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant