CN116499456B - Automatic positioning device and method for mobile robot and positioning system for unmanned mower - Google Patents
- Publication number
- CN116499456B (application CN202310773857.6A)
- Authority
- CN
- China
- Prior art keywords
- camera
- pattern
- pose
- mobile robot
- mark
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01D—HARVESTING; MOWING
- A01D34/00—Mowers; Mowing apparatus of harvesters
- A01D34/006—Control or measuring arrangements
- A01D34/008—Control or measuring arrangements for automated or remotely controlled operation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/10544—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
- G06K7/10712—Fixed beam scanning
- G06K7/10722—Photodetector array or CCD scanning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
Abstract
The invention discloses an automatic positioning device and method for a mobile robot and a positioning system for an unmanned mower. The positioning device is mounted on the mobile robot and comprises: a surround-view camera comprising a plurality of cameras for capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene, each marker being arranged within the field of view of the surround-view camera and carrying different preset feature patterns in one or more different viewing-angle directions; and a processing unit for receiving the marker images captured by each camera of the surround-view camera, recognizing the feature patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature patterns. The invention has the advantages of simple structure, low cost, high positioning efficiency and accuracy, and safety and reliability.
Description
Technical Field
The invention relates to the technical field of ground mobile robot positioning, and in particular to an automatic positioning device and method for a mobile robot and a positioning system for an unmanned mower.
Background
For a ground mobile robot, high-precision, low-cost positioning and environment perception are prerequisites for autonomous movement. Simultaneous localization and mapping (SLAM) technology has therefore attracted great interest, especially localization based on lidar and visual perception.
The heart of visual perception is a CCD- or CMOS-based camera. Depending on the viewing-angle range, application scene and so on, various cameras — monocular, binocular, fisheye, surround-view and others — can be used for positioning, mapping, semantic analysis and the like. Surround-view cameras, with their large viewing angles, are widely used in security, automotive driver assistance, consumer products, robotics and other fields. A surround-view camera is usually formed either by arranging several cameras at multiple points on a vehicle (front, rear, left, right), or by mounting several cameras in a cluster with non-parallel sensor normal vectors, a typical example being the FLIR Ladybug3 panoramic system.
To form a panoramic or surround-view system, the prior art stitches the images obtained by the individual cameras: natural environment features in the images are identified, the overlapping parts between cameras are removed, and finally a complete, non-overlapping panoramic or surround image is formed. Recognizing natural environment features and stitching are computationally intensive and complex, so they must be handled by a high-performance processor, which makes implementation costly. If such a surround-view system is further used to position the robot, positioning becomes complicated and inefficient, and it is difficult to achieve both positioning accuracy and positioning efficiency.
Ground mobile robots such as unmanned mowers and floor-sweeping robots generally operate in a certain fixed environment, i.e. the working scene is fixed, and this fixed working scene can provide a degree of prior information during positioning. In the prior art, however, positioning a ground mobile robot in such a fixed working scene generally adopts the traditional vision-based approach directly: a camera captures images of the robot's surroundings, features in the natural environment are extracted, and the robot's pose is finally calculated by means of a SLAM algorithm. This positioning approach has the following problems:
1. Feature extraction uses features of the natural environment in the image, but natural feature points are rich and changeable, requiring a large amount of computation; moreover, in real environments natural features are easily affected by environmental conditions, leading to instability and unreliability.
2. Image stitching is required when a surround-view camera is used, so a high-performance processor is needed for the large amount of data processing.
In summary, the prior-art approaches to positioning a ground mobile robot are complex to implement, require a large amount of computation, fail to exploit the fixed working scene of the robot, and offer low positioning efficiency, accuracy and reliability; adopting a high-performance processor to improve efficiency and accuracy greatly increases implementation cost.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides an automatic positioning device and method for a mobile robot, and a positioning system for an unmanned mower, with a simple structure, low cost, high positioning efficiency and accuracy, and safe, reliable operation.
To solve the above technical problems, the technical solution provided by the invention is as follows:
An automatic positioning device for a mobile robot, comprising:
a surround-view camera mounted on the mobile robot and comprising a plurality of cameras with different fields of view, for capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene; each marker is arranged within the field of view of the surround-view camera, and different preset feature code patterns are arranged on each marker in two or more different viewing-angle directions;
and a processing unit for receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature code patterns.
Further, each marker comprises one or more planes, each plane carrying a different feature pattern; the spacing between the markers is configured so that the image of a marker pattern in a camera is not smaller than the camera's minimum detectable size, and the markers are arranged so that at least one marker can be detected simultaneously by two or more cameras of the surround-view camera.
Further, the processing unit comprises a pattern recognition module, a single-pattern pose calculation module and a robot pose calculation module. The pattern recognition module receives the marker images captured by each camera of the surround-view camera, recognizes the feature code patterns on each marker, and outputs the recognized feature patterns to the single-pattern pose calculation module; the single-pattern pose calculation module calculates the pose of each recognized pattern relative to the corresponding camera; and the robot pose calculation module calculates the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras.
Further, the pattern recognition module includes:
an edge detection unit, for performing edge detection on the captured image to obtain the edge features of the image;
a pattern feature detection unit, for matching the detected edge features against the pattern features stored in a pre-built feature pattern library and screening out the matched marker pattern features;
a quadrilateral detection unit, for performing quadrilateral detection on the detected edge features to obtain the quadrilateral features of the image;
and a coordinate calculation unit, for calculating the identifier of each recognized marker pattern and the coordinates of the four vertices of the corresponding quadrilateral.
Further, the single-pattern pose calculation module includes:
a first calculation unit, for calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
and a second calculation unit, for calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the calculated homography matrix.
An automatic positioning method for a mobile robot, comprising the steps of:
S01, capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene, wherein each marker is arranged in advance within the field of view of a surround-view camera, and different preset feature code patterns are arranged on each marker in one or more viewing-angle directions;
S02, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature code patterns.
Further, the step S02 includes:
S201, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and outputting the recognized feature patterns;
S202, calculating the pose of each recognized pattern relative to the corresponding camera;
S203, calculating the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras.
Further, the step S202 includes:
S221, calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
S222, calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the homography matrix.
Further, in step S221, the homography matrix is calculated according to the following formula:

$H_{M_j,t_k}^{C_i}\,\bar{P}_{M_j} = P_{M_j,t_k}^{C_i}$, i.e. $H_{M_j,t_k}^{C_i} = P_{M_j,t_k}^{C_i}\,\bar{P}_{M_j}^{+}$ (1)

wherein $P_{M_j,t_k}^{C_i}$ is the matrix of quadrilateral vertex coordinates of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$, formed from the homogeneous coordinates of the four quadrilateral vertices, $\bar{P}_{M_j}$ is the normalized position matrix of the four vertices of the $M_j$-th pattern in the pattern coordinate system, and $(\cdot)^{+}$ denotes the pseudo-inverse;

in step S222, the Euclidean transformation matrix $T_{M_j,t_k}^{C_i}$ containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$ is calculated as follows, giving the pose of each pattern relative to the corresponding camera:

$T_{M_j,t_k}^{C_i} = s\,K_{C_i}^{-1}\,H_{M_j,t_k}^{C_i}\,A$ (2)

wherein $s$ is a scale factor, $K_{C_i}$ is the intrinsic matrix of the $C_i$-th camera, $i = 1,\dots,I$ with $I$ the number of cameras in the surround-view camera, $j = 1,\dots,J$ with $J$ the number of markers detectable by the surround-view camera, and $A$ is the pose extraction matrix.
Further, in the step S203, the current pose of the mobile robot is calculated according to formulas (3)–(8), which chain the per-pattern poses through the camera and robot frames and fuse the resulting estimates:

$T_{C_i,t_k}^{W} = T_{M_j}^{W}\,\big(T_{M_j,t_k}^{C_i}\big)^{-1}$, $\qquad T_{v,t_k}^{W} = T_{C_i,t_k}^{W}\,\big(T_{C_i}^{C_1}\big)^{-1}\,\big(T_{C_1}^{v}\big)^{-1}$

wherein $T_{M_j,t_k}^{C_i}$ is the Euclidean transformation matrix containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$; $T_{C_i,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of camera $C_i$ in the world coordinate system at time $t_k$; $T_{v,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of the mobile robot $v$ in the world coordinate system at time $t_k$; $T_{C_1}^{v}$ is the Euclidean transformation matrix between the main camera $C_1$ and the mobile robot $v$; $T_{C_i}^{C_1}$ is the pose matrix of the other cameras $C_i$ relative to the main camera $C_1$; $T_{M_j}^{W}$ is the Euclidean transformation matrix containing the pose of pattern $M_j$ in the world coordinate system; $Q_{t_k}$ is the noise covariance matrix at time $t_k$, used to weight the fusion of the $J$ per-marker estimates; $J$ is the number of markers detectable by the surround-view camera, and $t_k$ denotes the time instant.
An unmanned mower positioning system comprises the automatic positioning device described above, wherein the mobile robot is an unmanned mower on which the surround-view camera is mounted; alternatively, the unmanned mower positioning system comprises a processor and a memory, the memory storing a computer program and the processor executing the computer program to perform the automatic positioning method described above.
Compared with the prior art, the invention has the following advantages. The surround-view camera captures images of markers arranged in advance at different positions in the current fixed working scene, and the processing unit recognizes the marker images captured by each camera of the surround-view camera. This makes full use of the fixed working environment of the mobile robot and exploits the advantages of the surround-view camera to position the robot accurately while avoiding any image-stitching step: during image processing, only the specific patterns on the pre-arranged markers need to be recognized, which greatly reduces the amount of data processing and improves positioning efficiency without sacrificing positioning accuracy. Since natural features in the images need not be recognized, the data processing load is reduced further, enabling fast and accurate automatic positioning of the mobile robot.
Drawings
Fig. 1 is a schematic structural view of an automatic positioning device for a mobile robot in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the surround-view camera arrangement in embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the marker structure in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of two-dimensional-code-based markers in embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of the principle of the marker arrangement in embodiment 1 of the present invention.
Fig. 6 is a schematic flow chart of implementing automatic positioning of a mobile robot in embodiment 1 of the present invention.
Fig. 7 is a schematic view of the structure of the unmanned mower in embodiment 2 of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments, but the scope of protection of the invention is not limited thereby.
Example 1:
As shown in fig. 1, the automatic positioning device for a mobile robot of this embodiment includes:
a surround-view camera, for capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene, wherein each marker is arranged within the field of view of the surround-view camera, and different preset feature patterns are arranged on each marker in one or more viewing-angle directions;
and a processing unit, for receiving the marker images captured by each camera of the surround-view camera, recognizing the feature patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature patterns.
In this scheme, the surround-view camera captures images of markers arranged in advance at different positions in the current fixed working scene, and the processing unit recognizes the marker images captured by each camera. Because the positions of the markers in the fixed working environment are themselves fixed, and different preset feature code patterns are arranged in different viewing-angle directions of each marker, the patterns the mobile robot can capture differ from position to position, so the current pose of the mobile robot can be calculated and output from the recognized feature code patterns. This makes full use of the fixed working environment of the mobile robot and exploits the advantages of the surround-view camera to position the robot accurately while avoiding image stitching: only the specific patterns on the pre-arranged markers need to be recognized during image processing, which greatly reduces the data processing load and improves positioning efficiency while preserving accuracy, so the mobile robot can be positioned quickly and accurately.
In this embodiment, the pattern on a marker may specifically be a feature code such as a two-dimensional code. Because feature codes have specific patterns, colors and so on, they are easy to recognize by image processing, so the demand on computing performance is low. Recognizing feature code patterns effectively structures the environment and improves adaptability within it. Meanwhile, since different feature codes are placed at different viewing angles of the same marker, the mobile robot can capture marker patterns from every angle and accurately localize itself based on the differing patterns, avoiding the prior-art problem that forward positioning accuracy is lower than lateral positioning accuracy because markers are placed only in front of the robot.
As shown in fig. 1, the surround-view camera in this embodiment is implemented with M cameras (cameras 1 to M), each configured with a different field of view in the surround direction, where M ≥ 2; arranging multiple cameras to form a surround-view system increases the horizontal field of view of the ground mobile robot. Preferably, the overall viewing angle of the cameras in the surround-view camera is configured to be greater than 180 degrees, so that every marker within the working area can be captured fully and completely. A plurality of markers is arranged in advance in the working area of the ground mobile robot, each comprising several specific patterns as required; the patterns can be feature codes such as two-dimensional codes. The combination of the surround-view camera and the marker arrangement means that no natural features need to be recognized during image processing — pose calculation is performed only on the recognized feature code patterns — so the required amount of computation can be greatly reduced.
In a specific application embodiment, as shown in fig. 2, the surround-view camera may be implemented with 4 identical cameras, one mounted on each of the front, rear, left and right sides of the mobile robot. The viewing angle of the surround-view camera in the surround direction is the sum of the viewing angles of the 4 cameras in that direction; the fields of view of the cameras may or may not overlap, and the surround-view axis is perpendicular to the ground, ensuring that the surround-view camera sees as many artificial markers as possible. Arranged in this way, the four cameras completely cover a 360° viewing range.
In a specific application embodiment, as shown in fig. 3, markers are formed from 1 to 4 feature patterns, giving 4 different feature-pattern markers, where (a)–(d) in fig. 3 respectively show markers comprising one to four feature-pattern faces. In another embodiment, as shown in fig. 4, with two-dimensional codes as the preset patterns, markers are formed from 1 to 3 two-dimensional codes respectively, giving 3 different feature-pattern markers. Since the viewing-angle direction of each pattern on a marker differs — for example, the marker in fig. 4(a) carries a two-dimensional code on only one viewing plane, that in fig. 4(b) carries different two-dimensional codes on two viewing planes, and that in fig. 4(c) carries different two-dimensional codes on three different viewing planes — the marker patterns the mobile robot can capture differ with its position, and the relative pose of the robot and the pattern can be inferred from the captured patterns.
In this embodiment, each marker comprises one or more planes, each plane carrying a different feature pattern. The spacing between the markers is configured so that the image of a marker pattern in a camera is not smaller than the camera's minimum detectable size, and the markers are arranged so that two or more cameras of the surround-view camera can detect at least one marker simultaneously, ensuring that the surround-view camera can detect the markers reliably.
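As a quick plausibility check of this spacing rule, a pinhole-camera approximation relates marker size, distance and image footprint. The sketch below is illustrative only: the focal length, marker size and minimum detectable pixel size are assumed values, not parameters from this patent.

```python
# Rough placement check for marker spacing (pinhole-camera approximation).
# All numbers are illustrative assumptions, not values from the patent.

def marker_pixels(focal_px: float, marker_m: float, dist_m: float) -> float:
    """Approximate side length, in pixels, of a marker seen at dist_m metres."""
    return focal_px * marker_m / dist_m

def max_distance(focal_px: float, marker_m: float, min_px: float) -> float:
    """Farthest distance at which the marker still spans min_px pixels."""
    return focal_px * marker_m / min_px

if __name__ == "__main__":
    f = 600.0      # focal length in pixels (assumed)
    side = 0.20    # 20 cm marker face (assumed)
    min_px = 25.0  # minimum detectable size in pixels (assumed)
    print(f"marker at 3 m spans ~{marker_pixels(f, side, 3.0):.0f} px")
    print(f"place markers within ~{max_distance(f, side, min_px):.1f} m")
```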
Preferably, markers are arranged at each corner position and in the central area of the current fixed working scene: a marker placed at a corner has two or more viewing planes carrying two or more different preset feature code patterns, and a marker placed in the central area has four or more viewing planes carrying four or more different preset feature code patterns. With this arrangement, the mobile robot can capture a marker image from anywhere in the working area (within the fixed working scene) and then calculate its pose from the recognized marker patterns. It will be appreciated that markers need not be restricted to corners or the central area; for example, if the scene is small, a central marker may be unnecessary and corner markers may suffice. Correspondingly, a marker placed in the central area may carry one feature pattern or several, configured according to actual requirements.
In a specific application embodiment, as shown in fig. 5, the surround-view camera is mounted on a ground mobile robot moving in a fixed outdoor ground environment that is essentially level. The working area of the robot is quadrilateral (its extent is shown by the solid-line frame); markers with two specific patterns (double-sided markers) are placed at the 4 vertices of the working area, and a marker with 4 specific patterns (a four-sided marker) is placed in the central area of the quadrilateral. As the robot moves through the working area, the differing angular relationships between the robot and the markers cause different marker patterns to be captured, and the position and attitude of the robot are finally calculated by combining the recognized marker patterns.
It is understood that the specific layout of the markers can be configured according to the size and shape of the robot's actual working area. For example, if the working area is an irregular polygon, one or more double-sided markers can be placed at each corner, with one or more four-sided markers in the central area, so that the robot can capture marker images from every position in the working area.
In this embodiment, the processing unit includes a pattern recognition module, a single-pattern pose calculation module and a robot pose calculation module. The pattern recognition module receives the marker images captured by each camera of the surround-view camera, recognizes the feature code patterns on each marker, and outputs the recognized feature patterns to the single-pattern pose calculation module; the single-pattern pose calculation module calculates the pose of each recognized pattern relative to the corresponding camera; and the robot pose calculation module calculates the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras.
In this embodiment, the pattern recognition module specifically includes the following units (an illustrative recognition sketch follows the list):
an edge detection unit, for performing edge detection on the captured image to obtain the edge features of the image;
a pattern feature detection unit, for matching the detected edge features against the pattern features stored in a pre-built feature pattern library and screening out the matched marker pattern features;
a quadrilateral detection unit, for performing quadrilateral detection on the detected edge features to obtain the quadrilateral features of the image;
and a coordinate calculation unit, for calculating the identifier of each recognized marker pattern and the coordinates of the four vertices of the corresponding quadrilateral.
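The patent describes its own edge/quadrilateral/code pipeline; as a stand-in, OpenCV's ArUco module performs the same stages (edge detection, quadrilateral fitting, code matching) internally and returns exactly the outputs the coordinate calculation unit needs — a pattern identifier plus four vertex coordinates. A minimal sketch, assuming the OpenCV ≥ 4.7 ArUco API and a 4x4 dictionary standing in for the patent's feature code library:

```python
import cv2

# A predefined 4x4 dictionary stands in for the pre-built feature pattern library.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def recognize_patterns(image):
    """Return (pattern_id, 4x2 vertex coordinates) for each detected marker face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return []
    return [(int(i), c.reshape(4, 2)) for i, c in zip(ids.flatten(), corners)]
```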
In this embodiment, the single-pattern pose calculation module specifically includes:
a first calculation unit, for calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
and a second calculation unit, for calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the actual size of the marker.
In this embodiment, the first calculation unit calculates the homography matrix by a linear transformation, specifically as follows:

$H_{M_j,t_k}^{C_i}\,\bar{P}_{M_j} = P_{M_j,t_k}^{C_i}$, i.e. $H_{M_j,t_k}^{C_i} = P_{M_j,t_k}^{C_i}\,\bar{P}_{M_j}^{+}$ (1)

wherein $P_{M_j,t_k}^{C_i}$ is the matrix formed from the quadrilateral vertex coordinates $p_n$ ($n = 1,\dots,4$) of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$, $p_n$ are the coordinates of the four vertices of the quadrilateral, $\bar{P}_{M_j}$ is the normalized position matrix of the four vertices of the $M_j$-th pattern in the pattern coordinate system — a fixed, known matrix for a square pattern — and $(\cdot)^{+}$ denotes the pseudo-inverse.

In this embodiment, the second calculation unit calculates the Euclidean transformation matrix $T_{M_j,t_k}^{C_i}$ containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$ as follows, giving the pose of each pattern relative to the corresponding camera:

$T_{M_j,t_k}^{C_i} = s\,K_{C_i}^{-1}\,H_{M_j,t_k}^{C_i}\,A$ (2)

wherein $s$ is a scale factor, $K_{C_i}$ is the intrinsic matrix of the $C_i$-th camera, $i = 1,\dots,I$ with $I$ the number of cameras in the surround-view camera, $j = 1,\dots,J$ with $J$ the number of markers detectable by the surround-view camera, and $A$ is the pose extraction matrix. $K_{C_i}$ is a known parameter obtained by pre-calibration.
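A minimal sketch of the role of formulas (1)–(2), using the standard homography decomposition for a planar square pattern: the homography is fitted from the four vertex correspondences, $K^{-1}H$ yields two rotation columns and the translation up to the scale factor $s$, and the third rotation column is recovered by a cross product. The corner ordering and half-side convention are assumptions; an off-the-shelf alternative is cv2.solvePnP.

```python
import numpy as np
import cv2

def pose_from_quad(img_pts, K, half_side=0.5):
    """4x4 pose of a square pattern from its four image corners (pinhole model).
    img_pts: (4, 2) pixel coordinates, ordered to match obj below."""
    obj = np.array([[-half_side, -half_side], [half_side, -half_side],
                    [half_side,  half_side], [-half_side,  half_side]], np.float32)
    H = cv2.getPerspectiveTransform(obj, img_pts.astype(np.float32))
    M = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(M[:, 0])          # the scale factor s of formula (2)
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
    if t[2] < 0:                               # keep the pattern in front of the camera
        r1, r2, t = -r1, -r2, -t
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)                # project back onto a valid rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = U @ Vt, t
    return T
```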
In this embodiment, the robot pose calculation module calculates the current pose of the mobile robot from the per-pattern poses according to formulas (3)–(8), which chain the recognized pattern poses through the camera and robot frames and fuse the resulting estimates:

$T_{C_i,t_k}^{W} = T_{M_j}^{W}\,\big(T_{M_j,t_k}^{C_i}\big)^{-1}$, $\qquad T_{v,t_k}^{W} = T_{C_i,t_k}^{W}\,\big(T_{C_i}^{C_1}\big)^{-1}\,\big(T_{C_1}^{v}\big)^{-1}$

wherein $T_{M_j,t_k}^{C_i}$ is the Euclidean transformation matrix containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$; $T_{C_i,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of camera $C_i$ in the world coordinate system at time $t_k$; $T_{v,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of the mobile robot $v$ in the world coordinate system at time $t_k$; $T_{C_1}^{v}$ is the Euclidean transformation matrix between the main camera $C_1$ and the mobile robot $v$ — the main camera may specifically be chosen as the camera whose principal axis is aligned with the robot's direction of travel, and its pose is finally used as the pose of the surround-view camera; $T_{C_i}^{C_1}$ is the pose matrix of the other cameras $C_i$ relative to the main camera $C_1$; $T_{M_j}^{W}$ is the Euclidean transformation matrix containing the pose of pattern $M_j$ in the world coordinate system; $Q_{t_k}$ is the noise covariance matrix at time $t_k$, used to weight the fusion of the $J$ per-marker estimates; $J$ is the number of markers detectable by the surround-view camera, and $t_k$ denotes the time instant. $T_{C_1}^{v}$ and $T_{C_i}^{C_1}$ are known parameters obtained by pre-calibration.
The processing unit can be mounted on the mobile robot together with the surround-view camera — for example, integrated with it into a single automatic positioning device — or it can be placed separately at a processor end such as a remote control terminal, so that data processing is carried out remotely and the volume and weight of the robot are reduced; this can be configured according to actual requirements.
The automatic positioning method for a mobile robot in this embodiment specifically includes the steps of:
S01, capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene, wherein each marker is arranged in advance within the field of view of the surround-view camera, and different preset feature code patterns are arranged on each marker in one or more viewing-angle directions;
S02, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature code patterns.
In this embodiment, the specific steps of step S02 include:
S201, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and outputting the recognized feature patterns;
S202, calculating the pose of each recognized pattern relative to the corresponding camera;
S203, calculating the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras.
In this embodiment, step S202 specifically includes:
S221, calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
S222, calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the homography matrix.
In step S221 of this embodiment, the homography matrix $H_{M_j,t_k}^{C_i}$ is calculated according to formula (1); in step S222, the Euclidean transformation matrix $T_{M_j,t_k}^{C_i}$ containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$ is calculated according to formula (2), giving the pose of each pattern relative to the corresponding camera. Step S203 then calculates the current pose of the mobile robot $T_{v,t_k}^{W}$ from the per-pattern poses according to formulas (3)–(8).
It is understood that, besides formulas (1)–(8) above, other calculation formulas may be adopted to calculate the pose information, according to actual requirements.
In a specific application embodiment, as shown in fig. 6, the markers are first arranged according to step S01. While the mobile robot is running, the surround-view camera captures images; the camera data are read and checked for the preset marker patterns. If a pattern is found, its pose relative to the camera is calculated according to formulas (1) and (2). These operations are executed cyclically, and the current pose of the mobile robot is calculated from the set of pattern-to-camera poses according to formulas (3)–(8), achieving real-time positioning.
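Tying the per-frame cycle of fig. 6 together, a hypothetical main loop might look as follows; it reuses the recognize_patterns, pose_from_quad, robot_pose_estimates and fuse sketches above, and the camera interface and calibration inputs are assumptions, not specified by the patent.

```python
def positioning_loop(cameras, K, marker_map, T_main_cam, T_robot_main):
    """Yield one fused world-frame robot pose per cycle (cf. fig. 6).
    cameras: list of cv2.VideoCapture-like objects; camera 1 is the main camera C1.
    K: {i: 3x3 intrinsic matrix of camera Ci}; marker_map: {j: 4x4 world pose of Mj}."""
    while True:
        T_cam_pattern = {}
        for i, cam in enumerate(cameras, start=1):
            ok, frame = cam.read()
            if not ok:
                continue                                  # skip a camera that failed
            for j, quad in recognize_patterns(frame):     # marker IDs + vertices
                if j in marker_map:                       # only pre-surveyed markers
                    T_cam_pattern[(i, j)] = pose_from_quad(quad, K[i])  # formula (2)
        if T_cam_pattern:                                 # at least one marker seen
            yield fuse(robot_pose_estimates(T_cam_pattern, marker_map,
                                            T_main_cam, T_robot_main))
```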
Through the above steps, this embodiment makes full use of the fixed working scene in which the mobile robot operates and, by combining preset markers with the surround-view camera, achieves fast automatic positioning without an image-stitching step or complex data processing, greatly improving positioning efficiency and accuracy.
Example 2:
This embodiment applies the automatic positioning device of embodiment 1 to an unmanned mower — that is, the mobile robot is an unmanned mower — so the mower can be automatically positioned in real time. As shown in fig. 7, the unmanned mower positioning system includes the automatic positioning device of embodiment 1: a surround-view camera composed of a plurality (M) of cameras, each comprising a lens and a sensing unit, and a processing unit. The processing unit comprises several ISP (image signal processing) units and a camera pose calculation unit; each ISP unit processes the image from one camera to recognize the patterns on the markers, and the camera pose calculation unit receives the recognition data from each ISP unit and calculates and outputs the pose of the unmanned mower, achieving real-time automatic positioning. The camera pose calculation unit computes as in formulas (1)–(8) of embodiment 1, though other calculation methods may be adopted.
In this embodiment, the system further includes an inertial measurement unit (IMU) for measuring the angular velocity and acceleration of the unmanned mower and a mileage monitoring unit for monitoring its odometry data. The positioning system calculates the final fused pose of the unmanned mower from the recognized feature code patterns on the markers together with the data output by the IMU and the mileage monitoring unit. Building on the marker pattern recognition data, the angular velocity and acceleration measured by the IMU and the odometry data monitored by the mileage monitoring unit further refine the positioning, so the unmanned mower can obtain high-precision positioning information in real time during operation, ensuring mowing precision, safety and reliability.
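The patent does not specify the fusion algorithm; a production system would typically run an extended Kalman filter over pose, velocity and IMU biases. As a placeholder, the toy blend below mixes the vision fix with an odometry prediction, with the weight alpha an assumed tuning parameter.

```python
import numpy as np

def blend_pose(T_vision, T_odom_pred, alpha=0.8):
    """Blend a vision pose with a dead-reckoned prediction (both 4x4 matrices).
    alpha weights the vision fix; (1 - alpha) trusts the odometry/IMU prediction."""
    T = np.eye(4)
    T[:3, 3] = alpha * T_vision[:3, 3] + (1 - alpha) * T_odom_pred[:3, 3]
    T[:3, :3] = T_vision[:3, :3]   # keep the vision rotation: a planar mower
    return T                       # drifts mostly in heading under dead reckoning
```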
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit it in any way. While the invention has been described with reference to preferred embodiments, they are not limiting; any simple modification, equivalent variation or adaptation of the above embodiments according to the technical substance of the present invention shall fall within the scope of the technical solution of the present invention.
Claims (6)
1. An automatic positioning device for a mobile robot, comprising:
a surround-view camera, for capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene; each marker is arranged within the field of view of the surround-view camera, and different preset feature patterns are arranged on each marker in one or more viewing-angle directions;
a processing unit, for receiving the marker images captured by each camera of the surround-view camera, recognizing the feature patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature patterns;
the processing unit comprises a pattern recognition module, a single-pattern pose calculation module and a robot pose calculation module, wherein the pattern recognition module receives the marker images captured by each camera of the surround-view camera, recognizes the feature code patterns on each marker and outputs the recognized feature patterns to the single-pattern pose calculation module; the single-pattern pose calculation module calculates the pose of each recognized pattern relative to the corresponding camera; and the robot pose calculation module calculates the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras;
the single-pattern pose calculation module comprises:
a first calculation unit, for calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
a second calculation unit, for calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the calculated homography matrix;
the first calculation unit calculates the homography matrix specifically as follows:

$H_{M_j,t_k}^{C_i}\,\bar{P}_{M_j} = P_{M_j,t_k}^{C_i}$, i.e. $H_{M_j,t_k}^{C_i} = P_{M_j,t_k}^{C_i}\,\bar{P}_{M_j}^{+}$ (1)

wherein $P_{M_j,t_k}^{C_i}$ is the matrix of quadrilateral vertex coordinates of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$, formed from the coordinates of the four vertices of the quadrilateral, $\bar{P}_{M_j}$ is the normalized position matrix of the four vertices of the $M_j$-th pattern in the pattern coordinate system, and $(\cdot)^{+}$ denotes the pseudo-inverse;

the second calculation unit calculates the Euclidean transformation matrix $T_{M_j,t_k}^{C_i}$ containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$ specifically as follows, giving the pose of each pattern relative to the corresponding camera:

$T_{M_j,t_k}^{C_i} = s\,K_{C_i}^{-1}\,H_{M_j,t_k}^{C_i}\,A$ (2)

wherein $s$ is a scale factor, $K_{C_i}$ is the intrinsic matrix of the $C_i$-th camera, $i = 1,\dots,I$ with $I$ the number of cameras in the surround-view camera, $j = 1,\dots,J$ with $J$ the number of markers detectable by the surround-view camera, and $A$ is the pose extraction matrix.
2. The automatic positioning device for a mobile robot according to claim 1, wherein each marker comprises one or more planes, each plane carrying a different feature pattern; the spacing between the markers is configured so that the image of a marker pattern in a camera is not smaller than the camera's minimum detectable size, and the markers are arranged so that at least one marker can be detected simultaneously by two or more cameras of the surround-view camera.
3. The automatic positioning device for a mobile robot according to claim 1, wherein the pattern recognition module comprises:
an edge detection unit, for performing edge detection on the captured image to obtain the edge features of the image;
a pattern feature detection unit, for matching the detected edge features against the pattern features stored in a pre-built feature pattern library and screening out the matched marker pattern features;
a quadrilateral detection unit, for performing quadrilateral detection on the detected edge features to obtain the quadrilateral features of the image;
and a coordinate calculation unit, for calculating the identifier of each recognized marker pattern and the coordinates of the four vertices of the corresponding quadrilateral.
4. An automatic positioning method for a mobile robot, comprising the steps of:
S01, capturing images of a plurality of markers arranged in advance at different positions in the current fixed working scene, wherein each marker is arranged in advance within the field of view of a surround-view camera, and different preset feature code patterns are arranged on each marker in one or more viewing-angle directions;
S02, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and calculating and outputting the current pose of the mobile robot from the recognized feature code patterns;
the step S02 includes:
S201, receiving the marker images captured by each camera of the surround-view camera, recognizing the feature code patterns on each marker, and outputting the recognized feature patterns;
S202, calculating the pose of each recognized pattern relative to the corresponding camera;
S203, calculating the current pose of the mobile robot from the poses of the patterns relative to their corresponding cameras;
the step S202 includes:
S221, calculating a homography matrix from the identifier of the marker pattern recognized by the pattern recognition module and the coordinates of the four vertices of the corresponding quadrilateral;
S222, calculating the pose of each pattern relative to the corresponding camera from the camera intrinsics and the homography matrix;
in step S221, the homography matrix is calculated specifically as follows:

$H_{M_j,t_k}^{C_i}\,\bar{P}_{M_j} = P_{M_j,t_k}^{C_i}$, i.e. $H_{M_j,t_k}^{C_i} = P_{M_j,t_k}^{C_i}\,\bar{P}_{M_j}^{+}$ (1)

wherein $P_{M_j,t_k}^{C_i}$ is the matrix of quadrilateral vertex coordinates of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$, formed from the coordinates of the four vertices of the quadrilateral, $\bar{P}_{M_j}$ is the normalized position matrix of the four vertices of the $M_j$-th pattern in the pattern coordinate system, and $(\cdot)^{+}$ denotes the pseudo-inverse;

in the step S222, the Euclidean transformation matrix $T_{M_j,t_k}^{C_i}$ containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$ is calculated as follows, giving the pose of each pattern relative to the corresponding camera:

$T_{M_j,t_k}^{C_i} = s\,K_{C_i}^{-1}\,H_{M_j,t_k}^{C_i}\,A$ (2)

wherein $s$ is a scale factor, $K_{C_i}$ is the intrinsic matrix of the $C_i$-th camera, $i = 1,\dots,I$ with $I$ the number of cameras in the surround-view camera, $j = 1,\dots,J$ with $J$ the number of markers detectable by the surround-view camera, and $A$ is the pose extraction matrix.
5. The automatic positioning method for a mobile robot according to claim 4, wherein step S203 calculates the current pose of the mobile robot according to the following formulas:

$T_{C_i,t_k}^{W} = T_{M_j}^{W}\,\big(T_{M_j,t_k}^{C_i}\big)^{-1}$, $\qquad T_{v,t_k}^{W} = T_{C_i,t_k}^{W}\,\big(T_{C_i}^{C_1}\big)^{-1}\,\big(T_{C_1}^{v}\big)^{-1}$

wherein $T_{M_j,t_k}^{C_i}$ is the Euclidean transformation matrix containing the pose of the $M_j$-th pattern relative to the $C_i$-th camera at time $t_k$; $T_{C_i,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of camera $C_i$ in the world coordinate system at time $t_k$; $T_{v,t_k}^{W}$ is the Euclidean transformation matrix containing the pose of the mobile robot $v$ in the world coordinate system at time $t_k$; $T_{C_1}^{v}$ is the Euclidean transformation matrix between the main camera $C_1$ and the mobile robot $v$; $T_{C_i}^{C_1}$ is the pose matrix of the other cameras $C_i$ relative to the main camera $C_1$; $T_{M_j}^{W}$ is the Euclidean transformation matrix containing the pose of pattern $M_j$ in the world coordinate system; $Q_{t_k}$ is the noise covariance matrix at time $t_k$; $J$ is the number of markers detectable by the surround-view camera, and $t_k$ denotes the time instant.
6. An unmanned mower positioning system, comprising the automatic positioning device of any one of claims 1-3, wherein the mobile robot is an unmanned mower on which the surround-view camera is mounted; or comprising a processor and a memory, wherein the memory stores a computer program and the processor executes the computer program to perform the automatic positioning method of any one of claims 4-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310773857.6A CN116499456B (en) | 2023-06-28 | 2023-06-28 | Automatic positioning device and method for mobile robot and positioning system for unmanned mower |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310773857.6A CN116499456B (en) | 2023-06-28 | 2023-06-28 | Automatic positioning device and method for mobile robot and positioning system for unmanned mower |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116499456A CN116499456A (en) | 2023-07-28 |
CN116499456B true CN116499456B (en) | 2023-09-05 |
Family
ID=87328837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310773857.6A Active CN116499456B (en) | 2023-06-28 | 2023-06-28 | Automatic positioning device and method for mobile robot and positioning system for unmanned mower |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116499456B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102135429A (en) * | 2010-12-29 | 2011-07-27 | 东南大学 | Robot indoor positioning and navigating method based on vision |
CN107292818A (en) * | 2017-07-08 | 2017-10-24 | 上海交通大学 | It is a kind of based on the thread capturing device automatic station-keeping system and method for looking around camera |
CN107689061A (en) * | 2017-07-11 | 2018-02-13 | 西北工业大学 | Rule schema shape code and localization method for indoor mobile robot positioning |
CN108827316A (en) * | 2018-08-20 | 2018-11-16 | 南京理工大学 | Mobile robot visual orientation method based on improved Apriltag label |
WO2019080229A1 (en) * | 2017-10-25 | 2019-05-02 | 南京阿凡达机器人科技有限公司 | Chess piece positioning method and system based on machine vision, storage medium, and robot |
CN111047531A (en) * | 2019-12-02 | 2020-04-21 | 长安大学 | Monocular vision-based storage robot indoor positioning method |
CN112833883A (en) * | 2020-12-31 | 2021-05-25 | 杭州普锐视科技有限公司 | Indoor mobile robot positioning method based on multiple cameras |
CN115585810A (en) * | 2022-09-28 | 2023-01-10 | 南京航空航天大学 | Unmanned vehicle positioning method and device based on indoor global vision |
CN116203578A (en) * | 2022-12-06 | 2023-06-02 | 湖南大学 | Visual marker map pose acquisition method, robot positioning method and system |
CN116222558A (en) * | 2023-04-04 | 2023-06-06 | 东风汽车集团股份有限公司 | Positioning method, device and system based on vehicle-mounted information |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8174568B2 (en) * | 2006-12-01 | 2012-05-08 | Sri International | Unified framework for precise vision-aided navigation |
- 2023-06-28: Application CN202310773857.6A filed; patent CN116499456B granted (Active)
Non-Patent Citations (1)
Title |
---|
Research on an accurate recognition algorithm for circular coded marker points in vision measurement; Ni Zhangsong; Cheng Lei; Gu Yi; Chen Ran; Liu Meng; Zhong Kai; Li Zhongwei; New Technology & New Process (No. 12); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116499456A (en) | 2023-07-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |