WO2020056874A1 - Automatic parking system and method based on visual recognition - Google Patents
Automatic parking system and method based on visual recognition
- Publication number
- WO2020056874A1 (PCT/CN2018/113658)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- parking
- vehicle
- mapping
- map
- image
- Prior art date
- 2018-09-17
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/06—Automatic manoeuvring for parking
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
Definitions
- the present application belongs to the field of intelligent driving, and in particular relates to an automatic parking system based on visual recognition
- the main technical routes for automatic parking technology are based on traditional path planning algorithms, such as RRT, PRM, A*, and so on.
- the basic idea is to identify the approximate location of the parking space using ultrasound, randomly generate paths, and then perform collision detection on the randomly generated paths, that is, detect whether a path passes through obstacles or stays within the vehicle's drivable area.
- Dijkstra's algorithm and other methods are then used to select the optimal parking route.
- An automatic parking system based on visual recognition, characterized in that the system includes a mapping and positioning sub-module and a planning control sub-module;
- the mapping and positioning sub-module obtains images of the periphery of the vehicle by using cameras disposed on the vehicle; wherein the images of the periphery of the vehicle are stitched from the images obtained by each of the cameras; and each camera is a fisheye wide-angle camera;
- the camera's distortion correction formula is:
- x_cor = x + x(k_1r^2 + k_2r^4 + k_3r^6) + [2p_1xy + p_2(r^2 + 2x^2)],  (1)
- (x, y) is the original coordinate of a pixel in the image;
- (x_cor, y_cor) is the coordinate after the pixel is corrected for distortion;
- r is the distance from the pixel to the image center (r^2 = x^2 + y^2);
- [k_1, k_2, k_3] are the radial distortion parameters;
- [p_1, p_2] are the tangential distortion parameters;
- the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and establishes a map;
- the planning control sub-module uses the Reeds-Shepp curve to generate a smooth path on the map, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to complete tracking of the planned path, so that the vehicle moves to the parking target point.
- identifying the parking space point, parking space line, and/or guide line in the image is implemented by a deep learning algorithm.
- the map is obtained by feeding the parking space point, parking space line, and/or guide line information into an optimization algorithm.
- an automatic parking system based on visual recognition which is characterized in that: the system includes a mapping and positioning sub-module and a planning control sub-module;
- the mapping and positioning sub-module obtains images of the surroundings of the vehicle by using a camera disposed on the vehicle;
- the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and establishes a map;
- the planning control sub-module uses the Reeds-Shepp curve to generate a smooth path on the map, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to complete tracking of the planned path, so that the vehicle moves to the parking target point;
- the mapping and positioning sub-module obtains local obstacle information by using ultrasonic sensors provided on the vehicle; the mapping and positioning sub-module identifies a parking space point, a parking space line, and/or a guide line from the image, and combines them with the local obstacle information to establish the map.
- a visual recognition-based automatic parking method includes the following steps: a mapping and positioning step and a planning control step;
- the mapping and positioning step obtains an image of the surroundings of the vehicle by using a camera disposed on the vehicle, and identifies a parking space point, a parking space line, and/or a guide line from the image to establish a map;
- the planning control step uses the Reeds-Shepp curve to generate a smooth path on the map, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to complete tracking of the planned path, moving the vehicle to the parking target point;
- the Reeds-Shepp curve used to generate a smooth path refers to the trajectory planned from the current position to the parking position using the Reeds-Shepp curve.
- recognition of the parking space points, parking space lines, and/or guide lines in the image is implemented by a deep learning algorithm.
- the map is obtained by feeding the parking space point, parking space line, and/or guide line information into an optimization algorithm.
- the image around the vehicle is obtained by stitching the images captured by the cameras.
- the camera is a fisheye wide-angle camera, and its distortion correction formula is:
- x_cor = x + x(k_1r^2 + k_2r^4 + k_3r^6) + [2p_1xy + p_2(r^2 + 2x^2)],  (1)
- an automatic parking method based on visual recognition which is characterized in that the method includes the following steps: a mapping and positioning step and a planning control step;
- the mapping and positioning step obtains an image of the surroundings of the vehicle by using a camera disposed on the vehicle, and identifies a parking space point, a parking space line, and/or a guide line from the image to establish a map;
- the planning control step uses the Reeds-Shepp curve to generate a smooth path on the map, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to complete tracking of the planned path, moving the vehicle to the parking target point;
- the local obstacle information is obtained by using ultrasonic sensors set on the vehicle; the mapping and positioning step identifies a parking space point, a parking space line, and/or a guide line from the image, and combines them with the local obstacle information to establish the map.
- the inventive points of the present invention lie in, but are not limited to, the following aspects:
- the map is an information map incorporating local obstacles.
- the information map incorporating local obstacles provides reliable information for parking and improves the efficiency of parking planning.
- the use of non-visual sensors, such as ultrasonic sensors, establishes a clear division of labor with the visual sensors over the detected information. In theory, vision sensors alone could detect both the local obstacles and the information map, but using only vision sensors would impose a heavy computational burden on the later artificial neural network calculations and slow down vehicle parking control. These effects do not appear in traditional vehicle control, because traditional assisted vehicle control and planning rely very little on neural networks. Precisely because the present invention differs from such sensor-assisted vehicle control and planning, it uses Reeds-Shepp curves together with neural networks to identify parking space points, parking space lines, and/or guide lines.
- this raises the question of which information to choose: the Reeds-Shepp curve used in the present invention is specifically applied to the identified parking space points, parking space lines, and/or guide lines.
- visual identification of parking space points, parking space lines, and/or guide lines is effective, but the visual-sensor method that feeds the curve cannot simply be transferred to obstacle recognition. In actual experiments, the visual recognition of obstacles was poor, and the ultrasonic sensor outperformed the visual sensor for obstacles.
- using ultrasonic sensors instead of vision sensors for the local obstacle information can effectively reduce the pressure of later data processing, while also exploiting the advantage of ultrasonic sensors in detecting obstacle information within the field of view. Even though the ultrasonic sensors are used to selectively detect only local obstacles, their combination with the vision sensors was still subjected to rigorous experiments and testing; it is not a simple superposition of the two sensors. This is one of the inventive points of the present invention.
- the distortion correction formula adopted by the present invention takes into account the positions of the fisheye wide-angle cameras on the data acquisition vehicle, and differs from existing fisheye image correction.
- the camera positions are chosen so that, in the parking area, the parking space points, parking space lines, and guide lines lie close to the imaging center.
- FIG. 1 is a functional block diagram of an automatic parking system provided by an embodiment of the present invention.
- FIG. 2 is an example diagram of a Reeds-Shepp curve provided by an embodiment of the present invention.
- FIG. 3 is a flowchart of a Reeds-Shepp curve planning based on real-time environment information according to an embodiment of the present invention.
- FIG. 1 shows a functional block diagram of an implementation of an automatic parking system based on deep learning provided by an embodiment of the present invention, including a mapping and positioning sub-module and a planning control sub-module. The details are as follows:
- This module mainly performs feature extraction and parking space localization on the collected images to obtain obstacle maps and related parameter information.
- the module is divided into four steps: step one, correct the distortion of the images collected by the four fisheye cameras located around the vehicle, apply inverse perspective transformation, and stitch the surrounding images to obtain a complete ring view; step two, using mass-labeled surround-view mosaics, apply deep learning algorithms for parking space identification and visual feature extraction; step three, perform simultaneous localization and mapping (SLAM); step four, fuse the ultrasonic information to obtain an obstacle map that provides the map information for later path planning and completion of parking.
- Step 1: Use four fisheye cameras located on the front, rear, left, and right of the vehicle, so that the images collected by the cameras cover a 360-degree area around the vehicle and the images collected by two adjacent cameras have overlapping areas. Because images collected by fisheye cameras are heavily distorted, image distortion correction and inverse perspective transformation must be performed first; the surround-view stitching algorithm is then run to obtain a two-dimensional top-view surround mosaic.
- the camera in the present invention is a fish-eye wide-angle camera. Due to the large distortion of the images collected by the fisheye camera, the distortion of the collected image information must first be corrected.
- equation (1) can be used to correct the distortion of the four images collected:
- x_cor = x + x(k_1r^2 + k_2r^4 + k_3r^6) + [2p_1xy + p_2(r^2 + 2x^2)],  (1)
- (x, y) is the original coordinate of a pixel in the image; (x_cor, y_cor) is the coordinate after the pixel is corrected for distortion; r is the distance from the pixel to the image center (r^2 = x^2 + y^2); [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters.
- the selection of the distortion parameters here takes into consideration the need for the fisheye cameras to capture a 360-degree area around the vehicle, as well as the parking space points, parking space lines, and/or guide lines. For example, the fisheye cameras are located on the front, rear, left, and right sides of the vehicle to ensure 360-degree coverage without blind spots. In addition, the parking space points, parking space lines, and/or guide lines should be located as close to the center of the image as possible for clear imaging.
- the two tangential distortion parameters p_1 and p_2 are linearly superimposed, correcting in the form 2p_1xy + p_2(r^2 + 2x^2).
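- As a concrete illustration, the following is a minimal Python sketch of how the per-pixel correction of equation (1) could be applied. The calibration values are placeholders, and the y-coordinate correction is the standard Brown-Conrady counterpart, which the patent text does not print explicitly:

```python
def correct_distortion(x, y, k, p):
    """Apply the radial + tangential correction of equation (1).

    (x, y):           original (normalized) pixel coordinates
    k = (k1, k2, k3): radial distortion parameters
    p = (p1, p2):     tangential distortion parameters
    """
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y                           # r^2 = x^2 + y^2
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_cor = x + x * radial + (2 * p1 * x * y + p2 * (r2 + 2 * x * x))
    # Standard Brown-Conrady y-equation (assumption; not printed in the patent):
    y_cor = y + y * radial + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)
    return x_cor, y_cor

# Placeholder calibration values, not taken from the patent:
print(correct_distortion(0.3, -0.2, k=(-0.32, 0.11, -0.02), p=(1e-3, -5e-4)))
```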
- the inverse perspective transformation is performed on each of the four distortion-corrected images; that is, the position correspondence between points in the image coordinate system and points on a known plane in the three-dimensional world coordinate system is established.
- the vertical projection of the vehicle's geometric center onto the ground can be taken as the coordinate origin O_w; the Y_w axis is parallel to the rear axle, positive toward the left side of the vehicle; the X_w axis is perpendicular to Y_w, positive toward the front of the vehicle; and the Z_w axis is perpendicular to the ground, positive upward.
- This process includes: first, setting the field of view of the ring view, that is, determining the zoom factor of the bird's-eye view.
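- To make the correspondence concrete, here is a hedged OpenCV sketch of the inverse perspective mapping: four ground-plane points with known world coordinates are matched to their pixel locations, a homography is estimated, and the zoom factor fixes the pixels-per-meter scale of the bird's-eye view. All coordinate values and the image file name are illustrative placeholders, not calibration data from the patent:

```python
import cv2
import numpy as np

# Ground-plane reference points in the world frame O_w (meters) and their
# pixel locations in one distortion-corrected camera image (placeholders).
world_pts = np.float32([[1.0, 1.5], [1.0, -1.5], [4.0, 1.5], [4.0, -1.5]])
image_pts = np.float32([[210, 460], [430, 455], [250, 300], [390, 298]])

PX_PER_M = 50.0                          # zoom factor: pixels per meter
OFFSET = np.float32([300, 300])          # shift world origin into the image
birdseye_pts = world_pts * PX_PER_M + OFFSET

# Homography from the image plane to the ground plane, then warp to top view.
H = cv2.getPerspectiveTransform(image_pts, birdseye_pts)
camera_image = cv2.imread("front_undistorted.png")   # placeholder file name
top_view = cv2.warpPerspective(camera_image, H, (600, 600))
```

The four warped views from the four cameras can then be blended in their overlapping areas to form the ring view.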
- the first image in the mapping and positioning sub-module shown in FIG. 1 is a ring view generated by stitching.
- Step 2 Through the deep learning algorithm, the parking space points, parking space lines, guide lines and other information are identified.
- the labeling information includes parking space points, parking space lines, and guide lines.
- a supervised learning strategy is adopted, and a deep learning algorithm is used to design and learn parking space information identification network models.
- the deep network extracts distinguishable visual features and recognizes the parking space information in the ring view.
- the main reason for using deep learning algorithms to extract visual features such as parking space information is that deep convolutional neural networks not only have unique advantages of local perception and parameter sharing when processing images, but, when trained as supervised models on massively labeled data, also offer strong performance and robustness.
- the input of this parking space recognition network model is a ring view
- the labeled data is used to supervise the network model in learning the visual features of the ring view regarding parking space points, parking space lines, and guide lines.
- the network output is a segmentation result map; specifically, it has the same resolution as the input mosaic image.
- the semantic attribute includes the attributes of the parking spot, the attributes of the parking line, and the attributes of the guide line.
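- The patent does not fix a network architecture. As an illustration only, the following PyTorch sketch shows a small fully-convolutional encoder-decoder whose output is a segmentation map of the same resolution as the input ring view, with one channel per semantic attribute (background, parking spot point, parking line, guide line):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # background, parking spot point, parking line, guide line

class ParkingSegNet(nn.Module):
    """Illustrative encoder-decoder; layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, NUM_CLASSES, 4, stride=2, padding=1),
        )

    def forward(self, x):                        # x: ring view, (B, 3, H, W)
        return self.decoder(self.encoder(x))     # logits, (B, NUM_CLASSES, H, W)

# Supervised training step against a mass-labeled ring view (dummy data here):
model, loss_fn = ParkingSegNet(), nn.CrossEntropyLoss()
logits = model(torch.randn(1, 3, 256, 256))
loss = loss_fn(logits, torch.zeros(1, 256, 256, dtype=torch.long))
```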
- the information of the target parking space, including the position of the parking space in the local map and the length, width, and angle of the parking space, is obtained using the parking space points and parking space lines identified by the parking space identification network model.
- the semantic information of each pixel is obtained by a neural network model, and the vector attributes of the parking space are extracted from the pixel positions of the parking point attributes and the pixel positions of the parking line attributes.
- the target parking position, the target parking heading, and the parking space width, length, and angle are extracted from the pixel positions of the parking spot attributes and the pixel positions of the parking line attributes.
- it is then determined whether there is an obstacle in each visually recognized parking space; if there is an obstacle, the parking space is judged as unavailable for parking or not empty.
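- A simplified sketch of how parking space geometry might be derived from the labeled pixels; the two-extreme-points heuristic and the pixels-per-meter scale are illustrative assumptions, not the patent's actual extraction logic:

```python
import numpy as np

def parking_space_from_mask(seg, spot_class=1, px_per_m=50.0):
    """Derive entrance center, width, and heading from pixels labeled as
    parking spot points in the segmentation map `seg` (H x W of class ids)."""
    ys, xs = np.nonzero(seg == spot_class)
    pts = np.stack([xs, ys], axis=1).astype(float) / px_per_m   # to meters
    p0 = pts[pts[:, 0].argmin()]          # leftmost labeled point
    p1 = pts[pts[:, 0].argmax()]          # rightmost labeled point
    width = np.linalg.norm(p1 - p0)       # entrance width of the space
    angle = np.arctan2(p1[1] - p0[1], p1[0] - p0[0])  # entrance heading
    return (p0 + p1) / 2.0, width, angle
```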
- Step 3: The visual information obtained from the deep learning algorithm is used as input, and an optimization algorithm is used to obtain the vehicle pose and the local map built after the task starts.
- a Gauss-Newton optimization algorithm is used to find the best-matching position between the current real-time segmentation map and the local map, and the matching parameters are taken as the pose result.
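- As an illustration of this matching step, the sketch below runs a small Gauss-Newton iteration over the pose (x, y, θ), aligning current segmentation feature points to already-associated local-map points using a numerical Jacobian. Point association and the convergence threshold are simplifying assumptions:

```python
import numpy as np

def gauss_newton_pose(cur_pts, map_pts, pose0, iters=20):
    """Estimate the pose (x, y, theta) that best maps current segmentation
    points (N x 2) onto their associated local-map points (N x 2)."""
    pose = np.asarray(pose0, dtype=float)

    def residual(p):
        x, y, th = p
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        return (cur_pts @ R.T + np.array([x, y]) - map_pts).ravel()

    for _ in range(iters):
        r = residual(pose)
        # Numerical Jacobian of the residual w.r.t. (x, y, theta)
        J = np.stack([(residual(pose + dp) - r) / 1e-6
                      for dp in np.eye(3) * 1e-6], axis=1)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        pose += step
        if np.linalg.norm(step) < 1e-8:
            break
    return pose
```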
- Step 4: Obtain local obstacle information from the ultrasonic readings and integrate the obstacle information into the map.
- the ultrasonic information is used to detect an empty parking space.
- the distance between the side ultrasonic sensors and obstacles is measured in real time while the map is being built, and the obstacle's position in the local map is calculated by combining it with the vehicle pose.
- an obstacle information map is obtained, which includes the target parking space. If there are multiple parking spaces, all parking spaces are displayed to the user through a human-computer interaction interface, and the user selects the target parking space.
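- A minimal sketch of this fusion step, assuming a side-mounted sensor with a known offset in the vehicle frame: one range reading is projected through the current pose into a cell of the local obstacle grid. The sensor mounting values and grid resolution are placeholders:

```python
import numpy as np

def fuse_ultrasonic(grid, pose, rng, sensor_offset, res=0.1):
    """Mark the obstacle cell hit by one side-ultrasonic range reading.

    pose          = (x, y, theta) of the vehicle in the map frame
    sensor_offset = (dx, dy, mount_angle) of the sensor in the vehicle frame
    rng           = measured range to the obstacle, meters
    """
    x, y, th = pose
    dx, dy, a = sensor_offset
    # Sensor origin and beam direction in the map frame
    sx = x + dx * np.cos(th) - dy * np.sin(th)
    sy = y + dx * np.sin(th) + dy * np.cos(th)
    beam = th + a
    ox, oy = sx + rng * np.cos(beam), sy + rng * np.sin(beam)
    i, j = int(oy / res), int(ox / res)        # map cell of the obstacle
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1                          # mark occupied
    return grid
```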
- Path planning is the main strategy for solving automatic parking.
- the present invention adopts a path tracking method, generates a path in advance, and then uses a controller to perform path tracking.
- step one, path planning: use the Reeds-Shepp curve to generate a smooth path on the obstacle information map; step two, control the vehicle through the proportional-integral-derivative (PID) control algorithm to complete tracking of the planned trajectory; step three, move the vehicle to the parking target point, at which point the parking task ends.
- Step 1: For the map fused with local obstacle information, adaptively call the Reeds-Shepp curve according to the updated environment information to generate candidate parking paths. This method is an inventive point of the present invention.
- the principle of the automatic parking planning technology in the present invention is that during the parking process, as the vehicle gets closer to the parking target position, the surrounding environment information becomes more accurate and complete.
- when the parking environment differs significantly from the previous one, the surrounding environment information is updated and a new Reeds-Shepp curve is planned from the current position to the parking position. This mechanism ensures that Reeds-Shepp curve planning is invoked in real time to achieve accurate planning.
- the Reeds-Shepp curve can generate a trajectory from any starting pose (x_0, y_0, θ_0) to any ending pose (x_1, y_1, θ_1) according to the vehicle kinematics model.
- a Reeds-Shepp curve is composed of several fixed-radius arcs and straight segments, where the arc radius is generally the minimum turning radius of the car.
- the path length here refers to the length of the trajectory of the center of the car's rear axle, that is, the sum of the arc lengths of all arcs and the lengths of all straight segments.
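- Under a simple word representation of such a curve (arcs 'C' with a signed angle, straights 'S' with a signed length), the path length is straightforward to compute. A short sketch with placeholder values:

```python
import math

R_MIN = 4.5  # minimum turning radius of the car, meters (placeholder value)

def rs_path_length(segments, r=R_MIN):
    """Rear-axle-center path length: |arc angle| * radius for 'C' segments
    plus |length| for 'S' segments; the sign encodes forward/reverse gear."""
    return sum(abs(v) * r if t == 'C' else abs(v) for t, v in segments)

# e.g. an L+ S+ R- word: left quarter-turn, 3 m straight, reversing right quarter-turn
print(rs_path_length([('C', math.pi / 2), ('S', 3.0), ('C', -math.pi / 2)]))
```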
- the Reeds-Shepp curve is a geometric planning method, and a curve usually consists of the following basic types:
- C represents a circular arc trajectory
- | represents a gear shift (a switch between forward and reverse)
- S represents a straight line segment
- a subscript specifies the turning angle of an arc segment; in some cases a subscript of π/2 is given, because the arc must then cover a turning angle of exactly π/2 radians.
- Table 1 shows six kinds of motion primitives, which can construct all the best Reeds-Shepp curves.
- L and R represent left and right turn respectively; + and-represent forward and reverse gear respectively.
- of the 48 subdivided types, the (L-R+L-) and (R-L+R-) classes can be removed, leaving only 46 classes.
- Step 2: A path planning strategy is obtained through the above Reeds-Shepp curve generation method, and the vehicle is then controlled by the PID control algorithm to track the planned trajectory.
- as the vehicle travels, the parking environment around it is continuously updated, so the planned parking trajectory must be tracked and updated in real time.
- an environment difference threshold is first set, and the difference between the historical environment information and the real-time environment information determines whether to update the parking trajectory. If the environment difference is greater than the set threshold, that is, the surrounding parking environment has changed noticeably, Reeds-Shepp curve planning is performed on the newly acquired images; if the difference is small, that is, the surrounding parking environment has not changed noticeably, the existing path plan is maintained.
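- A hedged sketch of this tracking-and-replanning loop: a textbook PID controller acts on the lateral (cross-track) error, and a new Reeds-Shepp plan is requested only when the environment difference exceeds the threshold. The vehicle interface, `plan_rs_path`, and `env_diff` are hypothetical helpers, and the gains and threshold are untuned placeholders:

```python
class PID:
    """Textbook PID controller; gains are placeholders, not tuned values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i, self.prev = 0.0, 0.0

    def step(self, err, dt):
        self.i += err * dt                    # integral term
        d = (err - self.prev) / dt            # derivative term
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

def track(vehicle, plan_rs_path, env_diff, diff_threshold=0.2, dt=0.05):
    """Replan when the environment changed noticeably, else keep tracking."""
    steer_pid = PID(kp=1.2, ki=0.01, kd=0.3)
    path = plan_rs_path(vehicle.pose(), vehicle.target())
    while not vehicle.at_target():
        if env_diff(vehicle.sense()) > diff_threshold:   # obvious change
            path = plan_rs_path(vehicle.pose(), vehicle.target())  # replan
        err = path.cross_track_error(vehicle.pose())     # lateral error
        vehicle.apply(steering=steer_pid.step(err, dt), dt=dt)
```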
- Step 3: The vehicle follows the real-time planned trajectory to the parking target point, and the parking task ends.
- compared with the traditional parking space recognition method, this solves the problems of inaccurate parking space localization and limited scene coverage that come from relying solely on ultrasound.
- annotated scenes can be covered, with a parking space recognition rate above 95% and a recognition error below 3 pixels.
- the Gauss-Newton optimization algorithm is used to process the visual segmentation results, and the resulting pose information compensates for the poor pose estimation accuracy that occurs when trajectory tracking has no visual feedback.
- the modules or steps of the embodiments of the present invention described above may be implemented by a general-purpose computing device, and they may be centralized on a single computing device or distributed across multiple computing devices.
- they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from the one here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811079125.2A CN109720340B (zh) | 2018-09-17 | 2018-09-17 | Automatic parking system and method based on visual recognition
CN201811079125.2 | 2018-09-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020056874A1 (fr) | 2020-03-26 |
Family
ID=66295691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/113658 WO2020056874A1 (fr) | 2018-09-17 | 2018-11-02 | Automatic parking system and method based on visual recognition
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109720340B (fr) |
WO (1) | WO2020056874A1 (fr) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110293966B (zh) * | 2019-06-28 | 2021-06-01 | 北京地平线机器人技术研发有限公司 | 车辆泊车控制方法,车辆泊车控制装置和电子设备 |
CN110751850B (zh) * | 2019-08-30 | 2023-03-07 | 的卢技术有限公司 | 一种基于深度神经网络的车位识别方法和系统 |
CN110705359B (zh) * | 2019-09-05 | 2023-03-03 | 北京智行者科技股份有限公司 | 一种车位检测方法 |
CN110606071A (zh) * | 2019-09-06 | 2019-12-24 | 中国第一汽车股份有限公司 | 一种泊车方法、装置、车辆和存储介质 |
CN110562248B (zh) * | 2019-09-17 | 2020-09-25 | 浙江吉利汽车研究院有限公司 | 一种基于无人机的自动泊车系统及自动泊车方法 |
CN111176288A (zh) * | 2020-01-07 | 2020-05-19 | 深圳南方德尔汽车电子有限公司 | 基于Reedsshepp全局路径规划方法、装置、计算机设备及存储介质 |
CN111274343B (zh) * | 2020-01-20 | 2023-11-24 | 阿波罗智能技术(北京)有限公司 | 一种车辆定位方法、装置、电子设备及存储介质 |
CN111291650B (zh) * | 2020-01-21 | 2023-06-20 | 北京百度网讯科技有限公司 | 自动泊车辅助的方法及装置 |
WO2021226772A1 (fr) * | 2020-05-11 | 2021-11-18 | 上海欧菲智能车联科技有限公司 | Procédé et appareil d'affichage de vue d'ambiance, dispositif informatique et support de stockage |
CN111678518B (zh) * | 2020-05-29 | 2023-07-28 | 南京市德赛西威汽车电子有限公司 | 一种用于修正自动泊车路径的视觉定位方法 |
CN112644479B (zh) * | 2021-01-07 | 2022-05-13 | 广州小鹏自动驾驶科技有限公司 | 一种泊车控制方法和装置 |
CN112660117B (zh) * | 2021-01-19 | 2022-12-13 | 广州小鹏自动驾驶科技有限公司 | 一种自动泊车方法、泊车系统、计算机设备及存储介质 |
CN114274948A (zh) * | 2021-12-15 | 2022-04-05 | 武汉光庭信息技术股份有限公司 | 一种基于360度全景的自动泊车方法及装置 |
CN114312759A (zh) * | 2022-01-21 | 2022-04-12 | 山东浪潮科学研究院有限公司 | 一种智能辅助停车的方法、设备及存储介质 |
CN118082811B (zh) * | 2024-04-23 | 2024-08-09 | 知行汽车科技(苏州)股份有限公司 | 一种泊车控制方法、装置、设备及介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102963355A (zh) * | 2012-11-01 | 2013-03-13 | 同济大学 | 一种智能辅助泊车方法及其实现系统 |
CN103600707A (zh) * | 2013-11-06 | 2014-02-26 | 同济大学 | 一种智能泊车系统的泊车位检测装置及方法 |
CN106114623A (zh) * | 2016-06-16 | 2016-11-16 | 江苏大学 | 一种基于人类视觉的自动泊车路径规划方法及系统 |
WO2017003052A1 (fr) * | 2015-06-29 | 2017-01-05 | 엘지전자 주식회사 | Procédé d'assistance à la conduite de véhicule et véhicule |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6319213B2 (ja) * | 2015-07-10 | 2018-05-09 | トヨタ自動車株式会社 | ハイブリッド車両の制御装置 |
-
2018
- 2018-09-17 CN CN201811079125.2A patent/CN109720340B/zh active Active
- 2018-11-02 WO PCT/CN2018/113658 patent/WO2020056874A1/fr active Application Filing
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111753639B (zh) * | 2020-05-06 | 2024-08-16 | 上海欧菲智能车联科技有限公司 | 感知地图生成方法、装置、计算机设备和存储介质 |
CN111753639A (zh) * | 2020-05-06 | 2020-10-09 | 上海欧菲智能车联科技有限公司 | 感知地图生成方法、装置、计算机设备和存储介质 |
CN111723659A (zh) * | 2020-05-14 | 2020-09-29 | 上海欧菲智能车联科技有限公司 | 泊车位确定方法、装置、计算机设备和存储介质 |
CN111723659B (zh) * | 2020-05-14 | 2024-01-09 | 上海欧菲智能车联科技有限公司 | 泊车位确定方法、装置、计算机设备和存储介质 |
CN111640062B (zh) * | 2020-05-15 | 2023-06-09 | 上海赫千电子科技有限公司 | 一种车载环视图像的自动拼接方法 |
CN111640062A (zh) * | 2020-05-15 | 2020-09-08 | 上海赫千电子科技有限公司 | 一种车载环视图像的自动拼接方法 |
CN111626348B (zh) * | 2020-05-20 | 2024-02-02 | 安徽江淮汽车集团股份有限公司 | 自动泊车测试模型构建方法、设备、存储介质及装置 |
CN111626348A (zh) * | 2020-05-20 | 2020-09-04 | 安徽江淮汽车集团股份有限公司 | 自动泊车测试模型构建方法、设备、存储介质及装置 |
CN111599217A (zh) * | 2020-06-04 | 2020-08-28 | 纵目科技(上海)股份有限公司 | 一种自主泊车系统架构、架构实现方法、终端和存储介质 |
CN111599217B (zh) * | 2020-06-04 | 2023-06-13 | 纵目科技(上海)股份有限公司 | 一种自主泊车系统架构、架构实现方法、终端和存储介质 |
CN111860228B (zh) * | 2020-06-30 | 2024-01-16 | 阿波罗智能技术(北京)有限公司 | 用于自主泊车的方法、装置、设备以及存储介质 |
CN111860228A (zh) * | 2020-06-30 | 2020-10-30 | 北京百度网讯科技有限公司 | 用于自主泊车的方法、装置、设备以及存储介质 |
CN112180373A (zh) * | 2020-09-18 | 2021-01-05 | 纵目科技(上海)股份有限公司 | 一种多传感器融合的智能泊车系统和方法 |
CN112180373B (zh) * | 2020-09-18 | 2024-04-19 | 纵目科技(上海)股份有限公司 | 一种多传感器融合的智能泊车系统和方法 |
CN114454872A (zh) * | 2020-11-10 | 2022-05-10 | 上汽通用汽车有限公司 | 泊车系统和泊车方法 |
CN112880696A (zh) * | 2021-01-13 | 2021-06-01 | 成都朴为科技有限公司 | 一种基于同时建图与定位的泊车系统及方法 |
CN112937554A (zh) * | 2021-01-30 | 2021-06-11 | 惠州华阳通用电子有限公司 | 一种泊车方法及系统 |
CN113589685A (zh) * | 2021-06-10 | 2021-11-02 | 常州工程职业技术学院 | 一种基于深度神经网络的挪车机器人控制系统及其方法 |
CN113589685B (zh) * | 2021-06-10 | 2024-04-09 | 常州工程职业技术学院 | 一种基于深度神经网络的挪车机器人控制系统及其方法 |
CN113592949B (zh) * | 2021-07-01 | 2024-03-29 | 广东工业大学 | 用于车辆无线泊车影像的控制系统及方法 |
CN113592949A (zh) * | 2021-07-01 | 2021-11-02 | 广东工业大学 | 用于车辆无线泊车影像的控制系统及方法 |
CN113436275A (zh) * | 2021-07-12 | 2021-09-24 | 超级视线科技有限公司 | 一种基于标定板的泊位尺寸确定方法及系统 |
CN113781300B (zh) * | 2021-08-17 | 2023-10-13 | 东风汽车集团股份有限公司 | 一种用于远距离自主泊车的车辆视觉定位方法 |
CN113609148A (zh) * | 2021-08-17 | 2021-11-05 | 广州小鹏自动驾驶科技有限公司 | 一种地图更新的方法和装置 |
CN113781300A (zh) * | 2021-08-17 | 2021-12-10 | 东风汽车集团股份有限公司 | 一种用于远距离自主泊车的车辆视觉定位方法 |
CN113899377A (zh) * | 2021-08-23 | 2022-01-07 | 武汉光庭信息技术股份有限公司 | 一种基于相机的自动泊车终点相对坐标的测量方法及系统 |
CN113899377B (zh) * | 2021-08-23 | 2023-10-27 | 武汉光庭信息技术股份有限公司 | 一种基于相机的自动泊车终点相对坐标的测量方法及系统 |
CN113753029B (zh) * | 2021-08-27 | 2023-11-17 | 惠州华阳通用智慧车载系统开发有限公司 | 一种基于光流法的自动泊车方法及系统 |
CN113753029A (zh) * | 2021-08-27 | 2021-12-07 | 惠州华阳通用智慧车载系统开发有限公司 | 一种基于光流法的自动泊车方法及系统 |
CN114241437A (zh) * | 2021-11-19 | 2022-03-25 | 岚图汽车科技有限公司 | 一种特定区域泊车系统、控制方法及其设备 |
CN114179785B (zh) * | 2021-11-22 | 2023-10-13 | 岚图汽车科技有限公司 | 一种基于面向服务的融合泊车控制系统、电子设备和车辆 |
CN114179785A (zh) * | 2021-11-22 | 2022-03-15 | 岚图汽车科技有限公司 | 一种基于面向服务的融合泊车控制系统、电子设备和车辆 |
CN114030463B (zh) * | 2021-11-23 | 2024-05-14 | 上海汽车集团股份有限公司 | 一种自动泊车系统的路径规划方法及装置 |
CN114030463A (zh) * | 2021-11-23 | 2022-02-11 | 上海汽车集团股份有限公司 | 一种自动泊车系统的路径规划方法及装置 |
CN114926820A (zh) * | 2022-06-09 | 2022-08-19 | 东风汽车集团股份有限公司 | 基于深度学习和图像帧优化的斜车位识别方法及系统 |
CN114926820B (zh) * | 2022-06-09 | 2024-07-12 | 东风汽车集团股份有限公司 | 基于深度学习和图像帧优化的斜车位识别方法及系统 |
CN115235452A (zh) * | 2022-07-22 | 2022-10-25 | 上海师范大学 | 基于uwb/imu和视觉信息融合的智能泊车定位系统及方法 |
WO2024038687A1 (fr) * | 2022-08-19 | 2024-02-22 | Mitsubishi Electric Corporation | Système et procédé de commande de mouvement d'un véhicule |
CN115903837B (zh) * | 2022-12-19 | 2023-09-29 | 湖州丽天智能科技有限公司 | 一种车载光伏机器人自动充电方法和系统 |
CN115903837A (zh) * | 2022-12-19 | 2023-04-04 | 湖州丽天智能科技有限公司 | 一种车载光伏机器人自动充电方法和系统 |
CN116229426A (zh) * | 2023-05-09 | 2023-06-06 | 华东交通大学 | 基于全景环视图像的无人驾驶泊车停车位检测方法 |
CN116772744B (zh) * | 2023-08-24 | 2023-10-24 | 成都量芯集成科技有限公司 | 一种基于激光测距和视觉融合的3d扫描装置及其方法 |
CN116772744A (zh) * | 2023-08-24 | 2023-09-19 | 成都量芯集成科技有限公司 | 一种基于激光测距和视觉融合的3d扫描装置及其方法 |
CN118097623A (zh) * | 2024-04-22 | 2024-05-28 | 纽劢科技(上海)有限公司 | 基于深度学习的自动泊车障碍物接地线的检测方法、系统 |
Also Published As
Publication number | Publication date |
---|---|
CN109720340B (zh) | 2021-05-04 |
CN109720340A (zh) | 2019-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020056874A1 (fr) | Automatic parking system and method based on visual recognition | |
Qin et al. | Avp-slam: Semantic visual mapping and localization for autonomous vehicles in the parking lot | |
Cai et al. | Vision-based trajectory planning via imitation learning for autonomous vehicles | |
CN109733384A (zh) | 泊车路径设置方法及系统 | |
CN111037552B (zh) | 一种配电房轮式巡检机器人的巡检配置及实施方法 | |
CN107600067A (zh) | 一种基于多视觉惯导融合的自主泊车系统及方法 | |
WO2015024407A1 (fr) | Système de navigation à vision binoculaire basé sur un robot de puissance et procédé basé sur celui-ci | |
CN106272423A (zh) | 一种针对大尺度环境的多机器人协同制图与定位的方法 | |
CN112102369A (zh) | 水面漂浮目标自主巡检方法、装置、设备及存储介质 | |
Tripathi et al. | Trained trajectory based automated parking system using visual SLAM on surround view cameras | |
Bista et al. | Appearance-based indoor navigation by IBVS using line segments | |
CN110163963B (zh) | 一种基于slam的建图装置和建图方法 | |
AU2012323096A1 (en) | Method of calibrating a computer-based vision system onboard a craft | |
Alizadeh | Object distance measurement using a single camera for robotic applications | |
CN111612823A (zh) | 一种基于视觉的机器人自主跟踪方法 | |
CN106529466A (zh) | 一种基于仿生眼的无人驾驶车辆路径规划方法及系统 | |
CN110262487B (zh) | 一种障碍物检测方法、终端及计算机可读存储介质 | |
CN112344923A (zh) | 一种机器人的定位方法及其定位装置 | |
CN111161334A (zh) | 一种基于深度学习的语义地图构建方法 | |
CN111397609A (zh) | 路径规划方法、移动式机器及计算机可读介质 | |
CN107437071B (zh) | 一种基于双黄线检测的机器人自主巡检方法 | |
CN114379544A (zh) | 一种基于多传感器前融合的自动泊车系统、方法及装置 | |
CN111757021B (zh) | 面向移动机器人远程接管场景的多传感器实时融合方法 | |
CN117570960A (zh) | 一种用于导盲机器人的室内定位导航系统及方法 | |
CN111380535A (zh) | 基于视觉标签的导航方法、装置、移动式机器及可读介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18934336 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18934336 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/11/2021) |
|