CN117842397A - Visual positioning method for high-precision cabin docking system
- Publication number
- CN117842397A (application CN202311814013.8A)
- Authority
- CN
- China
- Prior art keywords
- docking
- cabin
- precision
- data
- positioning method
- Prior art date
- Legal status: Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64G—COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
- B64G1/00—Cosmonautic vehicles
- B64G1/22—Parts of, or equipment specially adapted for fitting in or to, cosmonautic vehicles
- B64G1/64—Systems for coupling or separating cosmonautic vehicles or parts thereof, e.g. docking arrangements
- B64G1/646—Docking or rendezvous systems
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D21/00—Measuring or testing not otherwise provided for
- G01D21/02—Measuring two or more variables by means not covered by a single other subclass
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
Abstract
The invention discloses a visual positioning method for a high-precision cabin docking system, comprising the following steps: photographing at fixed points and continuously scanning with a 3D camera, and building a digital model of the fixed cabin; calculating, from the model data, the axis direction of the docking cabin end and the robot pre-docking position and posture; during robot arm docking, monitoring the forces and torques in the six directions x, y, z, rx, ry, rz with a six-dimensional force sensor and transmitting the data to a computer to form a force control system, which computes the next target point from the clamp's center-of-gravity data, the coordinate offsets between the docking cabin section and the sensor, and the principle of virtual work, combined with the forces measured in each direction during docking; meanwhile, the two sides exchange position data at the same rate to achieve closed-loop position guidance, and docking is deemed successful when the position difference reaches a preset convergence domain and the assembly travel matches. The invention improves the accuracy and safety of spacecraft or aircraft docking.
Description
Technical Field
The invention relates to the technical field of cabin docking, and in particular to a visual positioning method for a high-precision cabin docking system.
Background
The cabin section docking system is a key technology in the docking of spacecraft and aircraft. Conventional cabin docking systems rely on mechanical design or visual guidance alone; the limitations of both introduce errors and constraints into the docking process and make it difficult to accurately control the contact force, alignment state, and related quantities.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a visual positioning method for a high-precision cabin docking system that improves the accuracy and safety of spacecraft or aircraft during docking.
The technical scheme of the invention is as follows:
a visual positioning method for a high precision cabin segment docking system, comprising the steps of:
s1, performing fixed-point photographing and continuous scanning through a 3D camera arranged at the tail end of a mechanical arm, and establishing a fixed cabin digital model;
s2, calculating the axis direction of the end of the docking cabin and calculating the pre-docking position and the pre-docking posture of the robot according to model data obtained by scanning of the 3D camera;
and S3, when the robot arm docks, using a six-dimensional force sensor to monitor the forces and torques in the six directions x, y, z, rx, ry, rz and transmitting the monitoring data to a computer to form a force control system; the force control system computes the next target point from the clamp's center-of-gravity data, the coordinate offsets between the docking cabin section and the sensor, and the principle of virtual work, combined with the forces measured in each direction during docking; the two sides exchange position data at the same rate to achieve closed-loop position guidance, and docking is deemed successful when the position difference reaches a preset convergence domain and the assembly travel matches.
Further, the fixed-point positions of step S1 include a surface cable position and a cable clearance position.
Further, step S2 includes:
s21, collecting point cloud data
Acquiring point cloud data of a target object by using a 3D camera;
s22, preprocessing
Preprocessing the collected point cloud data, including filtering, downsampling, and data registration, to align the point cloud data across multiple viewing angles;
s23, feature extraction
Extracting features from the preprocessed point cloud data using deep learning techniques to obtain a set of representative feature points;
s24, template making
Creating a 3D template resembling the target object according to its shape and structure, and marking feature points for matching on the template;
s25, feature point matching
Matching the feature points extracted from the target object against the feature points marked on the 3D template, and finding the matching pairs between them;
s26, pose estimation
Computing the target pose from the matched feature points using triangulation and geometric-constraint algorithms;
s27, precision verification
Performing self-calibration with a calibration plate of known precision, or cross-checking the pose estimation results from several different viewing angles, to verify the precision of the pose estimation.
Further, in step S23, the ORB algorithm is used to compute the point cloud features.
Further, in step S25, the RANSAC algorithm is used to match the feature points of the two point clouds and find the correspondence between them.
Further, in step S26, an ICP algorithm is used to estimate the pose matrix of the docking point.
Further, in step S27, the computed point cloud pose is transformed from the camera coordinate system into the robot arm coordinate system through the hand-eye calibration matrix, so as to drive the robot arm during docking.
Further, step S27 includes:
s271, placing a calibration block at a known position;
s272, scanning the calibration block by using different mechanical arm postures to acquire point cloud data, and simultaneously recording mechanical arm posture data R;
s273, extracting characteristic corner points of the calibration block through point cloud processing, and calculating the posture T of the calibration block under a camera coordinate system;
s274, calculating a hand-eye calibration result by using a calibre hand eye function in OpenCV by using the recorded R and the calculated T;
s275, the calibration result is applied to mechanical arm control so as to convert the point cloud coordinates acquired by the 3D camera into mechanical arm coordinates.
Further, in step S3, position data is exchanged at a rate of 250 Hz to achieve closed-loop position guidance.
Compared with the prior art, the invention has the following beneficial effects. It introduces force sensor technology into the cabin docking system: the force sensor measures the contact force and alignment state between the docking interfaces in real time, and analysis and processing of the sensor data provide real-time feedback and dynamic adjustment during docking, giving the method the following characteristics:
(1) improved docking precision: high-precision measurement by the force sensor allows the magnitude and position of the docking contact force to be controlled accurately, enabling more precise docking operations;
(2) improved docking safety: through real-time feedback and dynamic adjustment, the system avoids excessive or insufficient contact force, reducing the risk of damage or accidents during docking;
(3) improved docking efficiency: thanks to accurate control and real-time feedback, the force-sensor-based docking system completes the docking process faster and more accurately, improving docking efficiency and speed of operation;
(4) adaptability: the force sensor accommodates spacecraft or aircraft of various shapes and sizes and suits diverse docking scenarios, providing adaptive docking control and increasing the flexibility and application range of the docking system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of the steps of the visual positioning method for a high-precision cabin docking system provided by the present invention;
FIG. 2 is a flow chart of step S2 of the visual positioning method for a high-precision cabin docking system provided by the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Examples
This embodiment provides a visual positioning method for a high-precision cabin docking system. To improve docking precision and safety, it introduces force sensor technology: a force sensor measures the contact force and alignment state between the docking interfaces in real time, and analysis and processing of the sensor data provide real-time feedback and dynamic adjustment during docking.
A force sensor measures the forces acting between objects in contact. In a high-precision cabin docking system, force sensors are mounted on the docking interfaces to measure the contact force and alignment between them in real time. By analyzing and processing the force sensor data, the system adjusts the docking motion according to real-time feedback to achieve more accurate docking.
The force-sensor-based high-precision cabin docking system has the following technical characteristics:
1. high precision: the force sensor has higher measurement precision, and can monitor the tiny force change between the docking interfaces in real time, thereby providing more accurate docking control.
2. Real-time feedback: by analyzing and processing the force sensor data in real time, the system obtains timely information about the docking process, achieving instant feedback and dynamic adjustment.
3. Adaptive docking: the force-sensor-based system adjusts the docking motion according to real-time measurements, achieving adaptive docking that accommodates differences in the size and shape of different spacecraft or aircraft.
4. Enhanced safety: with accurate force measurement and real-time feedback control, the force-sensor-based docking system performs docking operations more safely, avoiding damage or accidents caused by inaccurate docking.
Referring to FIG. 1, the visual positioning method includes the following steps:
s1, performing fixed-point photographing and continuous scanning through a 3D camera arranged at the tail end of a mechanical arm, and establishing a fixed cabin digital model, wherein the fixed-point position comprises a surface cable position and a cable clearance position;
s2, according to model data obtained by scanning of a 3D camera, calculating the axis direction of the end of the docking cabin and calculating the pre-docking position and the pre-docking posture of the robot, wherein the process is shown in FIG. 2 and comprises the following steps:
s21, collecting point cloud data
Acquiring point cloud data of a target object by using a 3D camera;
s22, preprocessing
Preprocessing the collected point cloud data, including filtering, downsampling, and data registration, to reduce noise and redundant data and to align the point cloud data across multiple viewing angles;
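The following is a minimal sketch of this preprocessing step, assuming the Open3D library, which this disclosure does not name; the file names, voxel size, and outlier thresholds are illustrative placeholders.

```python
# Hedged sketch of s22 (assumes Open3D; file names and parameters are illustrative).
import open3d as o3d

def preprocess_scan(path, voxel_size=2.0):
    pcd = o3d.io.read_point_cloud(path)              # one scan from the 3D camera
    pcd, _ = pcd.remove_statistical_outlier(         # filtering: drop noisy points
        nb_neighbors=20, std_ratio=2.0)
    pcd = pcd.voxel_down_sample(voxel_size)          # downsampling to cut redundant data
    pcd.estimate_normals(                            # normals are needed by later ICP
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    return pcd

# Clouds from multiple viewing angles are then registered into one frame;
# the coarse and fine alignment used for that are sketched under s25 and s26.
views = [preprocess_scan(f"view_{i}.ply") for i in range(3)]
```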
s23, feature extraction
Extracting features from the preprocessed point cloud data using deep learning techniques to obtain a set of representative feature points, the point cloud features being computed with the ORB algorithm, as sketched below;
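ORB is a 2D image descriptor, so one plausible reading of this step, sketched below, is to run ORB on the 3D camera's registered intensity image and lift each keypoint into 3D through the depth map; the intrinsics fx, fy, cx, cy and the image pair are assumptions, not part of this disclosure.

```python
# Hedged sketch of s23: ORB keypoints on the 2D intensity image, back-projected to 3D.
import cv2
import numpy as np

def orb_keypoints_3d(gray, depth, fx, fy, cx, cy):
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:                        # no features found in this view
        return np.empty((0, 3)), np.empty((0, 32), np.uint8)
    points_3d, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth[v, u])                     # depth at the keypoint, camera units
        if z <= 0:                                 # skip pixels with no depth return
            continue
        points_3d.append(((u - cx) * z / fx,       # pinhole back-projection to 3D
                          (v - cy) * z / fy, z))
        kept_desc.append(desc)
    return np.array(points_3d), np.array(kept_desc)
```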
s24, template making
Creating a 3D template resembling the target object according to its shape and structure, and marking feature points for matching on the template;
s25, feature point matching
Matching the feature points extracted from the target object against the feature points marked on the 3D template to find the matching pairs; specifically, the RANSAC algorithm matches the feature points of the two point clouds to find the correspondence between them, as sketched below;
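A sketch of the RANSAC correspondence search follows, assuming Open3D's feature-based RANSAC registration; FPFH descriptors stand in here for the feature points of s23, and all thresholds are illustrative.

```python
# Hedged sketch of s25: coarse template-to-scan alignment by feature RANSAC (Open3D).
import open3d as o3d

def coarse_match(scan, template, voxel_size=2.0):
    def fpfh(pcd):  # one descriptor per point, standing in for the extracted features
        return o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    dist = voxel_size * 1.5
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        scan, template, fpfh(scan), fpfh(template),
        mutual_filter=True,
        max_correspondence_distance=dist,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation          # 4x4 coarse pose, refined by ICP in s26
```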
s26, pose estimation
Computing the target pose from the matched feature points using triangulation and geometric-constraint algorithms, with an ICP algorithm estimating the pose matrix of the docking point, as sketched below;
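A sketch of the ICP refinement under the same Open3D assumption; the coarse transform from s25 serves as the initial guess, and the 4x4 result is the docking-point pose matrix.

```python
# Hedged sketch of s26: ICP refinement of the coarse pose (assumes Open3D).
import open3d as o3d

def refine_pose(scan, template, coarse_transform, max_dist=1.0):
    # Point-to-plane ICP; the template must carry normals (see the s22 sketch).
    result = o3d.pipelines.registration.registration_icp(
        scan, template, max_dist, coarse_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation   # 4x4 homogeneous pose matrix of the docking point
```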
s27, precision verification
Performing self-calibration with a calibration plate of known precision, or cross-checking the pose estimation results from several different viewing angles, to verify the precision of the pose estimation;
in this embodiment, the calculated point cloud gesture is used to convert the camera coordinate system into the robot arm coordinate system through the hand-eye calibration matrix, so as to drive the robot arm docking process, which specifically includes:
s271, placing a calibration block at a known position;
s272, scanning the calibration block by using different mechanical arm postures to acquire point cloud data, and simultaneously recording mechanical arm posture data R;
s273, extracting characteristic corner points of the calibration block through point cloud processing, and calculating the posture T of the calibration block under a camera coordinate system;
s274, calculating a hand-eye calibration result by using a calibre hand eye function in OpenCV by using the recorded R and the calculated T;
s275, applying the calibration result to the mechanical arm control so as to convert the point cloud coordinates acquired by the 3D camera into mechanical arm coordinates;
and S3, when the robot arm docks, a six-dimensional force sensor monitors the forces and torques in the six directions x, y, z, rx, ry, rz; the monitoring data are transmitted to a computer to form a force control system, which computes the next target point from the clamp's center-of-gravity data, the coordinate offsets between the docking cabin section and the sensor, and the principle of virtual work, combined with the forces measured in each direction during docking; the two sides exchange position data at a rate of 250 Hz to achieve closed-loop position guidance, and docking is deemed successful when the position difference reaches the preset convergence domain and the assembly travel matches; a sketch of this loop follows.
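A minimal sketch of the 250 Hz force loop, using a simple admittance law in place of the virtual-work computation described above; read_wrench, send_target, gravity_comp, and the gain matrix K are placeholders for the real sensor and arm interfaces, not APIs named in this disclosure.

```python
# Hedged sketch of S3: 250 Hz closed-loop force guidance (placeholder interfaces).
import time
import numpy as np

K = np.diag([0.002] * 3 + [0.0005] * 3)  # compliance gains per axis, illustrative
F_REF = np.zeros(6)                      # desired contact wrench in x, y, z, rx, ry, rz
PERIOD = 1.0 / 250.0                     # position data exchanged at 250 Hz

def dock(read_wrench, send_target, gravity_comp, pose, convergence=1e-4):
    # pose: 6-vector [x, y, z, rx, ry, rz] of the current arm target.
    while True:
        t0 = time.monotonic()
        wrench = read_wrench() - gravity_comp(pose)  # remove the clamp's own weight
        step = K @ (F_REF - wrench)                  # admittance: wrench error -> pose step
        pose = pose + step                           # next target point for the arm
        send_target(pose)                            # closed loop with the arm controller
        if np.linalg.norm(step) < convergence:       # preset convergence domain reached
            return pose                              # success if assembly travel also matches
        time.sleep(max(0.0, PERIOD - (time.monotonic() - t0)))
```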
The visual positioning method has the following advantages:
1. docking is accurate, without damaging the wire harness or abrading the cabin end;
2. docking efficiency is high compared with mechanical design and visual guidance;
3. operation is simple and convenient.
The foregoing description of the preferred embodiment of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (9)
1. A visual positioning method for a high-precision cabin docking system, comprising the steps of:
s1, performing fixed-point photographing and continuous scanning through a 3D camera arranged at the tail end of a mechanical arm, and establishing a fixed cabin digital model;
s2, calculating the axis direction of the end of the docking cabin and calculating the pre-docking position and the pre-docking posture of the robot according to model data obtained by scanning of the 3D camera;
and S3, when the robot arm docks, using a six-dimensional force sensor to monitor the forces and torques in the six directions x, y, z, rx, ry, rz and transmitting the monitoring data to a computer to form a force control system; the force control system computes the next target point from the clamp's center-of-gravity data, the coordinate offsets between the docking cabin section and the sensor, and the principle of virtual work, combined with the forces measured in each direction during docking; the two sides exchange position data at the same rate to achieve closed-loop position guidance, and docking is deemed successful when the position difference reaches a preset convergence domain and the assembly travel matches.
2. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein the fixed-point positions of step S1 include a surface cable position and a cable clearance position.
3. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein step S2 comprises:
s21, collecting point cloud data
Acquiring point cloud data of a target object by using a 3D camera;
s22, preprocessing
Preprocessing the collected point cloud data, including filtering, downsampling, and data registration, to align the point cloud data across multiple viewing angles;
s23, feature extraction
Extracting features from the preprocessed point cloud data using deep learning techniques to obtain a set of representative feature points;
s24, template making
Creating a 3D template resembling the target object according to its shape and structure, and marking feature points for matching on the template;
s25, feature point matching
Matching the feature points extracted from the target object against the feature points marked on the 3D template, and finding the matching pairs between them;
s26, pose estimation
Computing the target pose from the matched feature points using triangulation and geometric-constraint algorithms;
s27, precision verification
Performing self-calibration with a calibration plate of known precision, or cross-checking the pose estimation results from several different viewing angles, to verify the precision of the pose estimation.
4. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein in step S23 the ORB algorithm is used to compute the point cloud features.
5. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein in step S25 the RANSAC algorithm is used to match the feature points of the two point clouds and find the correspondence between them.
6. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein in step S26 an ICP algorithm is used to estimate the pose matrix of the docking point.
7. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein in step S27 the computed point cloud pose is transformed from the camera coordinate system into the robot arm coordinate system through the hand-eye calibration matrix, so as to drive the robot arm during docking.
8. The visual positioning method for a high-precision cabin docking system of claim 7, wherein step S27 comprises:
s271, placing a calibration block at a known position;
s272, scanning the calibration block by using different mechanical arm postures to acquire point cloud data, and simultaneously recording mechanical arm posture data R;
s273, extracting characteristic corner points of the calibration block through point cloud processing, and calculating the posture T of the calibration block under a camera coordinate system;
s274, calculating a hand-eye calibration result by using a calibre hand eye function in OpenCV by using the recorded R and the calculated T;
s275, the calibration result is applied to mechanical arm control so as to convert the point cloud coordinates acquired by the 3D camera into mechanical arm coordinates.
9. The visual positioning method for a high-precision cabin docking system according to claim 1, wherein in step S3 closed-loop position guidance is achieved by exchanging position data at a rate of 250 Hz.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311814013.8A | 2023-12-26 | 2023-12-26 | Visual positioning method for high-precision cabin docking system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117842397A | 2024-04-09 |
Family
ID=90547410
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311814013.8A | Visual positioning method for high-precision cabin docking system | 2023-12-26 | 2023-12-26 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN117842397A |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118456449A | 2024-07-09 | 2024-08-09 | 空间液态金属科技发展(江苏)有限公司 | Real-time interaction method for docking state of mechanical arm and load adapter |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |