
CN111738085A - System construction method and device for realizing automatic driving and simultaneously positioning and mapping - Google Patents

System construction method and device for realizing automatic driving and simultaneously positioning and mapping

Info

Publication number
CN111738085A
Authority
CN
China
Prior art keywords
mapping
automatic driving
algorithm
map
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010441208.2A
Other languages
Chinese (zh)
Other versions
CN111738085B (en)
Inventor
兰凤崇
李继文
陈吉清
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010441208.2A
Publication of CN111738085A
Application granted
Publication of CN111738085B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422 Technical drawings; Geographical maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system construction method and device for realizing automatic driving with simultaneous localization and mapping. The method comprises the following steps: after image data is obtained, detecting dynamic objects in the image data with a target detection thread; extracting feature points of the image data with a feature extraction algorithm, the feature extraction algorithm and the target detection thread running in parallel; according to the dynamic-object detection result and the feature-point extraction result, identifying and eliminating the feature points that lie on dynamic objects, and completing matching with the remaining feature points according to a preset matching algorithm; and acquiring an initial camera pose from feature-point matching of adjacent frames, then optimizing the camera pose by tracking a local map to complete camera localization. The method offers high tracking precision and high computation speed, meets the requirements for localizing and mapping an autonomous vehicle in a dynamic environment, and can be widely applied in the field of automatic driving.

Description

System construction method and device for realizing automatic driving and simultaneously positioning and mapping
Technical Field
The invention relates to the field of automatic driving, and in particular to a system construction method and device for realizing automatic driving with simultaneous localization and mapping.
Background
Simultaneous localization and mapping (SLAM) is one of the core problems in automatic driving research, and SLAM with high robustness, high precision, and real-time performance has important application value in fields such as automatic driving.
At present, researchers in China and abroad mainly study SLAM technology for automatic driving on the assumption of a static environment, and with low computational efficiency. Some existing systems can provide true scale information for monocular visual SLAM maps, reduce the influence of unstable factors in the visual localization process, and improve stability under conditions such as loss of visual features. In dynamic scenes, however, the tracking precision of such systems is extremely low, and tracking may even fail; at the same time, because of their low computational efficiency, they cannot operate when the vehicle speed is high.
Limited by the performance and power consumption of the computing platform, achieving real-time operation under constrained resources has always been a difficult problem. Moreover, conventional SLAM is built on the assumption of a static environment and does not consider the motion of objects in it; in a real environment, walking pedestrians and passing vehicles change the scene dynamically, so the map built by a SLAM system cannot stay consistent over time, and vision-based features become unstable because of object motion. Therefore, how to construct SLAM with high robustness, high precision, and real-time performance under limited resources is a problem that urgently needs to be solved in the field of intelligent driving.
Term definition:
DG-SLAM: short for SLAM Based on Deep learning and GPU Parallel Computing.
Disclosure of Invention
In order to solve one of the above technical problems, an object of the present invention is to provide a system construction method and apparatus for implementing automatic driving while positioning and mapping.
The technical scheme adopted by the invention is as follows:
a system construction method for realizing automatic driving and simultaneously positioning and mapping comprises three parallel threads, namely a target detection thread, a tracking thread and a local mapping thread, wherein the tracking thread comprises the following steps:
after image data are obtained, detecting a dynamic object in the image data by adopting the target detection thread;
extracting feature points of the image data by using a feature extraction algorithm, wherein the feature extraction algorithm and the target detection thread adopt a parallel computing mode;
according to the dynamic-object detection result and the feature-point extraction result, identifying and eliminating the feature points that lie on dynamic objects, and completing matching with the remaining feature points according to a preset matching algorithm;
acquiring an initial camera pose from feature-point matching of adjacent frames, and optimizing the camera pose by tracking a local map to complete camera localization;
in the process of tracking the local map, a saturated kernel function is used to compute the minimized sum of squared two-norms of the error terms of the feature-point projection matching.
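The parallel arrangement of the detection and feature-extraction steps can be sketched as follows. This is a minimal Python illustration; `detect_dynamic_objects` and `extract_feature_points` are hypothetical stand-ins for the SSD detector and the feature extractor, not the patent's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_dynamic_objects(image):
    # Stand-in for the SSD-based target detection thread.
    return [(40, 40, 80, 80)]  # dynamic-object boxes (x1, y1, x2, y2)

def extract_feature_points(image):
    # Stand-in for the feature extraction algorithm.
    return [(10, 10), (50, 50), (100, 100)]  # feature points (x, y)

frame = object()  # placeholder for one image frame

# Detection and feature extraction are submitted together, so they run
# concurrently rather than one after the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    boxes_future = pool.submit(detect_dynamic_objects, frame)
    points_future = pool.submit(extract_feature_points, frame)
    boxes, points = boxes_future.result(), points_future.result()

print(len(boxes), len(points))  # both results are ready for the next step
```

Once both results are available, the points falling on dynamic objects can be eliminated before matching, as the steps above describe.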
Further, with the saturated kernel function, the minimized sum of squared two-norms of the error terms of the feature-point projection matching is:

x* = argmin_x Σ_i S(||f_i(x)||^2)

where f_i(x) is the residual between the coordinates of the point set P in the pixel coordinate system and the coordinates of the current-frame feature points, and S is a robust saturated kernel function. Writing ||f_i(x)||^2 = e, then:

S(e) = e, if e ≤ a
S(e) = a, if e > a

where a is a preset threshold on the residual.
Further, the optimizing the pose of the camera by tracking the local map includes:
computing, from the initial pose, the space of the current frame centered on the camera;
screening the map points in that space using three criteria in parallel, and computing the point set P for projection matching with the current frame;
and projecting the point set P from the world coordinate system to the pixel coordinate system for projection matching with the current-frame feature points, and performing least-squares optimization by bundle adjustment.
Further, the screening of map points in the space by using three criteria in parallel includes:
based on a first criterion, screening out the local map point set S located in the space;
based on a second criterion, computing the angle between the current viewing ray v and the mean viewing direction n of each map point, and discarding the point if v·n < cos(60°), obtaining a point set T;
and based on a third criterion, computing the distance d from each map point to the camera center, and discarding the point if d does not lie in the point's scale-invariance interval, obtaining a point set W.
Further, the target detection thread comprises a target detection model; the model is based on a deep-learning algorithm and is obtained by having that algorithm learn the target objects.
Further, the deep-learning-based algorithm is the SSD algorithm, which takes VGG-16 as its base network and, based on a feedforward convolutional neural network, detects on feature maps of different scales separately, then removes redundant overlapping boxes with a non-maximum suppression algorithm to obtain the final detection boxes.
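Non-maximum suppression is a standard procedure; the sketch below shows how redundant overlapping boxes might be removed. The 0.5 overlap threshold is an illustrative assumption, not a value stated in the patent:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, drop boxes overlapping it too much, repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

# Two heavily overlapping boxes and one distant box: the lower-scoring
# duplicate is suppressed, the distant box survives.
print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)],
          [0.9, 0.8, 0.7]))  # [0, 2]
```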
Further, the target detection model is obtained by the Tensorflow deep learning framework training.
Further, the local mapping process is established based on ORB-SLAM.
Further, the threshold value is 35.89.
The other technical scheme adopted by the invention is as follows:
a system construction device for realizing automatic driving and simultaneously positioning and constructing a map comprises:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the method described above.
The invention has the beneficial effects that: the method has higher trajectory tracking precision and higher computation speed, and meets the requirements for localizing and mapping an autonomous vehicle in a dynamic environment.
Drawings
Fig. 1 is an overall flowchart of a DG-SLAM system construction method for implementing efficient and simultaneous positioning and mapping of autonomous driving according to an embodiment of the present invention.
FIG. 2 is a flowchart of an embodiment of the present invention based on the SSD model;
FIG. 3 is a flow chart of a GPU-based parallel computing model according to an embodiment of the present invention;
FIG. 4 is a diagram of the detection effect of the SSD algorithm on KITTI data sets in accordance with an embodiment of the present invention;
FIG. 5 is a diagram showing the comparison effect of the testing precision of the system constructed by the method of the embodiment of the present invention and the ORB-SLAM system on a KITTI data set;
FIG. 6 is a comparison graph of the real-time performance of the KITTI data set test by the system constructed by the method of the embodiment of the present invention and the ORB-SLAM system.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality" means two or more; terms such as greater than, less than, and exceeding are understood to exclude the stated number, while above, below, and within are understood to include it. If "first" and "second" are used, they only distinguish technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating their precedence.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
Referring to fig. 1, this embodiment provides a DG-SLAM system construction method for realizing efficient simultaneous localization and mapping for automatic driving. The system is composed of three parallel threads: a target detection thread, a tracking thread, and a local mapping thread. The tracking thread includes, but is not limited to, the following steps.
the method comprises the following steps: in the tracking thread, a data association part detects a dynamic object by using a target detection algorithm model based on SSD with high detection speed and high detection precision, as shown in FIG. 2, the algorithm takes VGG-16 as a basic network, and detects on feature maps of different scales respectively based on a feedforward convolutional neural network, and then eliminates redundant repeated frames by using a non-maximum suppression algorithm to obtain a final detection frame. The algorithmic model was obtained by training the KITTI dataset using the tensrflow deep learning framework. In order to better test the effect of the algorithm, the KITTI data set is divided into four types of vehicle, pedestrian, cyclist and background. And obtaining a target detection model in the system by taking 6400 pictures as a training set, 800 pictures as a training verification set and 800 pictures as a test set in 8000 pictures with labels. In the system, image data is given firstly, dynamic objects (such as pedestrians and vehicles) are detected by using an SSD-based target detection algorithm model, the feature points of the image data are extracted by using a feature point extraction algorithm in parallel, then the dynamic target feature points are removed by combining a dynamic target detection result and a feature point extraction result, and finally the matching of the feature points is completed according to a matching algorithm of the feature points.
Step two: as shown in fig. 3, when the tracking thread selects map points for localization, an initial pose is obtained from the matched feature points, the space I of the current frame centered on the camera is computed from that initial pose, and map points are screened using three checks. Checking 1 (first criterion): screen out the local map point set S located in space I. Checking 2 (second criterion): compute the angle between the current viewing ray v and the mean viewing direction n of each map point, and discard the point if v·n < cos(60°), obtaining a point set T. Checking 3 (third criterion): compute the distance d between each map point and the camera center, and discard the point if d does not lie in the point's scale-invariance interval, obtaining a point set W. The key-frame image is projected to the local map in parallel, key frames co-visible with the current frame are searched, and the formula:
S ∩ T ∩ W = P
is used to compute the local map point set P for projection matching with the current frame; the point set P is projected from the world coordinate system to the pixel coordinate system for projection matching with the current-frame feature points, and least-squares optimization is performed by bundle adjustment.
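The three checks and the intersection S ∩ T ∩ W = P can be sketched as follows. The map-point fields and the sample values are illustrative assumptions, and the check-1 predicate is passed in as a function since the patent does not spell out the geometry of space I:

```python
import math

def screen_map_points(map_points, cam_center, in_space):
    # Each map point is a dict with 'pos' (x, y, z), 'n' (unit mean viewing
    # direction), and 'dmin'/'dmax' (its scale-invariance distance interval).
    S, T, W = set(), set(), set()
    for i, p in enumerate(map_points):
        dvec = tuple(a - b for a, b in zip(p['pos'], cam_center))
        d = math.sqrt(sum(c * c for c in dvec))
        v = tuple(c / d for c in dvec)  # current viewing ray (unit vector)
        if in_space(p['pos']):                                   # check 1 -> S
            S.add(i)
        if sum(a * b for a, b in zip(v, p['n'])) >= math.cos(math.radians(60)):
            T.add(i)                                             # check 2 -> T
        if p['dmin'] <= d <= p['dmax']:                          # check 3 -> W
            W.add(i)
    return S & T & W  # P = S ∩ T ∩ W

sample_points = [
    {'pos': (0, 0, 5),  'n': (0, 0, 1),  'dmin': 1, 'dmax': 10},  # passes all
    {'pos': (0, 0, 5),  'n': (0, 0, -1), 'dmin': 1, 'dmax': 10},  # fails check 2
    {'pos': (0, 0, 50), 'n': (0, 0, 1),  'dmin': 1, 'dmax': 10},  # fails 1 and 3
]
print(screen_map_points(sample_points, (0, 0, 0), lambda pos: pos[2] < 20))  # {0}
```

Because the checks are independent per point, they map naturally onto the parallel screening the method describes.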
Step three: in the tracking thread, a saturated kernel function is used in pose optimization to minimize the sum of squared two-norms of the error terms:

x* = argmin_x Σ_i S(||f_i(x)||^2)

where f_i(x) is the residual between the coordinates of the point set P in the pixel coordinate system and the coordinates of the current-frame feature points, and S is a robust saturated kernel function. Writing ||f_i(x)||^2 = e, then:

S(e) = e, if e ≤ a
S(e) = a, if e > a

where a is the threshold on the residual, taken as 35.89 in this embodiment. When the residual is below 35.89, the function grows one-to-one with the residual; once it exceeds 35.89, the function value is held at the threshold, which caps the maximum gradient, handles outliers effectively, and keeps the system robust. In this embodiment each optimization runs four rounds of 10 iterations each. The robust kernel is enabled in the first two rounds, which keeps the error value from diverging and, more importantly, suppresses the influence of wrong matches; it is disabled in the last two rounds, since the first two rounds have largely suppressed the mismatches. After every second round, each feature point is judged to be an inlier or an outlier against a preset error threshold; when optimization finishes, all matched outliers are deleted according to that judgment, and the inliers (points observed more than 0 times) are counted. If the number of inliers exceeds 10, tracking is considered successful; otherwise tracking fails. In pose optimization, the objective is the minimized sum of squared two-norms of the error terms; to keep the two-norm from growing too fast when the error is large, a kernel function is needed so that the error on any single edge does not mask the other edges. Concretely, the original two-norm error metric is replaced by a function that grows more slowly, while remaining smooth enough to be differentiable.
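A minimal sketch of the saturated kernel with the embodiment's threshold of 35.89; the plain summed cost shown here omits the solver's surrounding iteration machinery:

```python
A = 35.89  # residual threshold from this embodiment

def saturated_kernel(e):
    # S(e) = e below the threshold, held at the threshold above it,
    # so the contribution of an outlier (and its gradient) is capped.
    return e if e <= A else A

def robust_cost(residuals):
    # Kernelized objective: sum_i S(||f_i||^2) over the residuals.
    return sum(saturated_kernel(r * r) for r in residuals)

print(saturated_kernel(10.0))   # 10.0 (inlier: grows linearly)
print(saturated_kernel(100.0))  # 35.89 (outlier: clamped at the threshold)
```

Being a piecewise-linear clamp rather than a nonlinear curve, the kernel evaluates with a single compare-and-select per residual, which is what makes it amenable to GPU parallelism.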
To address the drawback that the common nonlinear Huber robust kernel is unsuited to GPU parallel processing, the method exploits the fact that the saturated function is a linear equation, adapting to GPU-parallel acceleration without affecting the error-optimization effect. In this embodiment, the result of each pose optimization in the localization part of the tracking thread is used as the initial pose value, and steps two and three are repeated to obtain the latest pose value until the optimization result meets the set threshold.
In this embodiment, a deep-learning-based data association method is coupled into the data association algorithm: an SSD (Single Shot MultiBox Detector)-based target detection algorithm is introduced to detect dynamic objects in the environment, and dynamic feature points are removed before feature point matching, reducing feature-point mismatches. The specific workflow of this embodiment is as follows. Given the input image data, dynamic objects (such as pedestrians and vehicles) are first detected by the SSD target detection algorithm; FIG. 4 shows the detection effect of the algorithm on the KITTI dataset. The detection algorithm and the feature point extraction algorithm run in parallel, and the feature points of dynamic objects are removed by combining the detection result with the extraction result, after which feature matching is completed. The initial camera pose is then obtained by inter-frame matching, and the camera pose is optimized by tracking the local map: the current frame is mapped to a spatial region, and whether each local-map space point lies in the field of view of the current frame is computed. This step does not depend on the 2D data of the current frame, so the selection of feature points to be optimized can be completed by constructing a parallel computing model. To keep the two-norm from growing too fast when the error is large, a kernel function is needed so that the error on any single edge does not mask the others; therefore, after feature-point selection and matching are completed, this embodiment applies the kernel function to the minimized sum of squared two-norms of the error terms, computes the re-projection error of the feature points in parallel, and completes the localization of the camera pose.
In this embodiment, the test is performed on KITTI sequence 01 and compared with the ORB-SLAM system. As shown in FIG. 5, RMSE (Root Mean Square Error) is the root mean square error of the system test and is well suited to evaluating the accuracy of a SLAM system. As shown in FIG. 6, ORB-SLAM runs at an average frame rate of about 25 frames per second, with unstable run time; the SLAM algorithm of this embodiment runs at more than 125 frames per second on average, over 5 times faster than ORB-SLAM, and its run time is stable. This embodiment therefore realizes SLAM with high robustness, high precision, and real-time performance.
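The RMSE metric used in FIG. 5 can be computed from corresponding trajectory positions as follows; the two short trajectories are illustrative only:

```python
import math

def rmse(estimated, ground_truth):
    # Root mean square error between corresponding trajectory positions.
    assert len(estimated) == len(ground_truth)
    squared_errors = [sum((a - b) ** 2 for a, b in zip(p, q))
                      for p, q in zip(estimated, ground_truth)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.1)]  # estimated positions
gt = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # ground-truth positions
print(rmse(est, gt))
```

A lower RMSE over a whole KITTI sequence indicates a more accurate estimated trajectory.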
The embodiment further provides a system construction device for realizing automatic driving and simultaneously positioning and mapping, which comprises:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, cause the at least one processor to implement the method described above.
The system construction device for realizing automatic driving with simultaneous localization and mapping can execute the DG-SLAM system construction method provided by the method embodiment above, can execute any combination of the implementation steps of that embodiment, and has the corresponding functions and beneficial effects of the method.
It will be understood that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as is known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (10)

1. A system construction method for realizing automatic driving and simultaneously positioning and mapping is characterized in that the system comprises three parallel threads, namely a target detection thread, a tracking thread and a local mapping thread, and the tracking thread comprises the following steps:
after image data are obtained, detecting a dynamic object in the image data by adopting the target detection thread;
extracting feature points of the image data by using a feature extraction algorithm, wherein the feature extraction algorithm and the target detection thread adopt a parallel computing mode;
according to the dynamic-object detection result and the feature-point extraction result, identifying and eliminating the feature points that lie on dynamic objects, and completing matching with the remaining feature points according to a preset matching algorithm;
acquiring an initial camera pose from feature-point matching of adjacent frames, and optimizing the camera pose by tracking a local map to complete camera localization;
in the process of tracking the local map, a saturated kernel function is used to compute the minimized sum of squared two-norms of the error terms of the feature-point projection matching.
2. The method for constructing a system for realizing automatic driving and simultaneous localization and mapping as claimed in claim 1, wherein, with the saturated kernel function, the minimized sum of squared two-norms of the error terms of the feature-point projection matching is:

x* = argmin_x Σ_i S(||f_i(x)||^2)

where f_i(x) is the residual between the coordinates of the point set P in the pixel coordinate system and the coordinates of the current-frame feature points, and S is a robust saturated kernel function. Writing ||f_i(x)||^2 = e, then:

S(e) = e, if e ≤ a
S(e) = a, if e > a

where a is a preset threshold on the residual.
3. The system construction method for realizing automatic driving simultaneous localization and mapping according to claim 2, wherein the optimizing the pose of the camera by tracking the local map comprises:
calculating the space of a current frame taking the camera as the center according to the initial pose;
screening map points in the space by adopting three standards in parallel, and calculating to obtain a point set P which is in projection matching with the current frame;
and projecting the point set P from the world coordinate system to the pixel coordinate system for projection matching with the feature points of the current frame, and performing least-squares optimization by bundle adjustment.
4. The method for constructing the system for realizing automatic driving and simultaneously positioning and mapping as claimed in claim 3, wherein the screening of the map points in the space by using three criteria in parallel comprises:
based on a first criterion, screening out the local map point set S located in the space;
based on a second criterion, computing the angle between the current viewing ray v and the mean viewing direction n of each map point, and discarding the point if v·n < cos(60°), obtaining a point set T;
and based on a third criterion, computing the distance d from each map point to the camera center, and discarding the point if d does not lie in the point's scale-invariance interval, obtaining a point set W.
5. The method as claimed in claim 1, wherein the target detection thread includes a target detection model, the target detection model is based on a deep-learning algorithm, and the model is obtained by having that algorithm learn the target objects.
6. The method as claimed in claim 5, wherein the deep-learning-based algorithm is the SSD algorithm, which takes VGG-16 as its base network and, based on a feedforward convolutional neural network, detects on feature maps of different scales separately, then removes redundant overlapping boxes with a non-maximum suppression algorithm to obtain the final detection boxes.
7. The system construction method for realizing automatic driving and simultaneous positioning and mapping as claimed in claim 5, wherein the object detection model is obtained by Tensorflow deep learning framework training.
8. The system construction method for realizing automatic driving simultaneous localization and mapping according to claim 1, wherein the local mapping process is established based on ORB-SLAM.
9. The system construction method for realizing automatic driving and simultaneous localization and mapping according to claim 2, wherein the threshold value is 35.89.
10. A system construction device for realizing automatic driving simultaneous localization and mapping, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the system construction method for realizing automatic driving simultaneous localization and mapping according to any one of claims 1-9.
CN202010441208.2A 2020-05-22 2020-05-22 System construction method and device for realizing automatic driving simultaneous positioning and mapping Active CN111738085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010441208.2A CN111738085B (en) 2020-05-22 2020-05-22 System construction method and device for realizing automatic driving simultaneous positioning and mapping


Publications (2)

Publication Number Publication Date
CN111738085A true CN111738085A (en) 2020-10-02
CN111738085B CN111738085B (en) 2023-10-24

Family

ID=72647642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010441208.2A Active CN111738085B (en) 2020-05-22 2020-05-22 System construction method and device for realizing automatic driving simultaneous positioning and mapping

Country Status (1)

Country Link
CN (1) CN111738085B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM
CN109781092A (en) * 2019-01-19 2019-05-21 北京化工大学 Localization for Mobile Robot and drawing method is built in a kind of danger chemical accident
US20190178654A1 (en) * 2016-08-04 2019-06-13 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
CN109934862A (en) * 2019-02-22 2019-06-25 上海大学 A kind of binocular vision SLAM method that dotted line feature combines
CN110378345A (en) * 2019-06-04 2019-10-25 广东工业大学 Dynamic scene SLAM method based on YOLACT example parted pattern


Non-Patent Citations (3)

Title
RAÚL MUR-ARTAL ET AL.: "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", vol. 31, no. 5, pages 1152-1153 *
ZHANG WEI: "Research on Indoor Visual SLAM Based on Object Semantic Information" (in Chinese), vol. 2019, no. 2019, page 45 *
YU XINYI ET AL.: "Research on Robot Pose Estimation Optimization Algorithms in the SLAM Process" (in Chinese), vol. 28, no. 8, page 713 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2022089577A1 (en) * 2020-10-31 2022-05-05 华为技术有限公司 Pose determination method and related device thereof
CN114445490A (en) * 2020-10-31 2022-05-06 华为技术有限公司 Pose determination method and related equipment thereof
CN114283199A (en) * 2021-12-29 2022-04-05 北京航空航天大学 Dynamic scene-oriented dotted line fusion semantic SLAM method
CN114283199B (en) * 2021-12-29 2024-06-11 北京航空航天大学 Dynamic scene-oriented dotted line fusion semantic SLAM method

Also Published As

Publication number Publication date
CN111738085B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
US11003939B2 (en) Information processing apparatus, information processing method, and storage medium
CN107886048A (en) Method for tracking target and system, storage medium and electric terminal
CN112509044A (en) Binocular vision SLAM method based on dotted line feature fusion
CN108197604A (en) Fast face positioning and tracing method based on embedded device
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN113313763A (en) Monocular camera pose optimization method and device based on neural network
CN110378942A (en) Barrier identification method, system, equipment and storage medium based on binocular camera
CN115797736A (en) Method, device, equipment and medium for training target detection model and target detection
CN112861870B (en) Pointer instrument image correction method, system and storage medium
CN111738085A (en) System construction method and device for realizing automatic driving and simultaneously positioning and mapping
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
US11354923B2 (en) Human body recognition method and apparatus, and storage medium
CN114972490B (en) Automatic data labeling method, device, equipment and storage medium
CN115049731A (en) Visual mapping and positioning method based on binocular camera
US11699303B2 (en) System and method of acquiring coordinates of pupil center point
CN115077563A (en) Vehicle positioning accuracy evaluation method and device and electronic equipment
CN114648639B (en) Target vehicle detection method, system and device
CN116091784A (en) Target tracking method, device and storage medium
Tombari et al. Stereo for robots: quantitative evaluation of efficient and low-memory dense stereo algorithms
CN111462321B (en) Point cloud map processing method, processing device, electronic device and vehicle
CN111765892B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN114549825A (en) Target detection method and device, electronic equipment and storage medium
CN114612544A (en) Image processing method, device, equipment and storage medium
CN114283081A (en) Depth recovery method based on pyramid acceleration, electronic device and storage medium
CN115752489B (en) Positioning method and device of movable equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant