
WO2020056874A1 - An automatic parking system and method based on visual recognition - Google Patents

An automatic parking system and method based on visual recognition

Info

Publication number
WO2020056874A1
WO2020056874A1 PCT/CN2018/113658 CN2018113658W WO2020056874A1 WO 2020056874 A1 WO2020056874 A1 WO 2020056874A1 CN 2018113658 W CN2018113658 W CN 2018113658W WO 2020056874 A1 WO2020056874 A1 WO 2020056874A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking
vehicle
mapping
map
image
Prior art date
Application number
PCT/CN2018/113658
Other languages
English (en)
French (fr)
Inventor
姚聪
成悠扬
张家旺
汪路超
郑靖
陈壹
夏炎
Original Assignee
魔门塔(苏州)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 魔门塔(苏州)科技有限公司 filed Critical 魔门塔(苏州)科技有限公司
Publication of WO2020056874A1 publication Critical patent/WO2020056874A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06Automatic manoeuvring for parking
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces

Definitions

  • the present application belongs to the field of intelligent driving, and in particular relates to an automatic parking system based on visual recognition
  • the main technical routes for automatic parking technology are based on traditional path planning algorithms, such as RRT, PRM, A* and so on.
  • the basic idea is to identify the approximate location of the parking space by using ultrasound, and randomly generate paths, and then perform collision detection on the randomly generated paths, that is, to detect whether the path will pass through obstacles or whether the path is within the vehicle's driveable area.
  • Dijkstra's algorithm and other methods are used to select the optimal parking route.
  • An automatic parking system based on visual recognition characterized in that the system includes a mapping and positioning sub-module and a planning control sub-module;
  • the mapping and positioning sub-module obtains images of the periphery of the vehicle by using a camera disposed on the vehicle; wherein the images of the periphery of the vehicle are images stitched from the images obtained by each of the cameras; and the camera is a fisheye wide-angle camera
  • the camera's distortion correction formula is:
  • x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],   (1)
  • (x, y) are the original coordinates of a pixel in the image;
  • (x_cor, y_cor) are the coordinates of the pixel after distortion correction;
  • [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters;
  • the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and establishes a map;
  • the planning control sub-module uses the Reeds-Shepp curve to generate a smooth path for the map, and controls the vehicle to complete tracking of the planned path through a proportional-integral-derivative (PID) control algorithm, moving the vehicle to the parking target point.
  • PID proportional-integral-derivative
  • identifying the parking space, parking space line and / or guide line in the image is implemented by a deep learning algorithm.
  • the map is obtained by using an optimization algorithm through input of the parking space point, the parking space line, and / or the guide line information.
  • an automatic parking system based on visual recognition which is characterized in that: the system includes a mapping and positioning sub-module and a planning control sub-module;
  • the mapping and positioning sub-module obtains images of the surroundings of the vehicle by using a camera disposed on the vehicle;
  • the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and establishes a map;
  • the planning control sub-module uses the Reeds-Shepp curve to generate a smooth path for the map, and controls the vehicle to complete tracking of the planned path through a proportional-integral-derivative (PID) control algorithm, moving the vehicle to the parking target point;
  • PID proportional-integral-derivative
  • the mapping and positioning sub-module obtains local obstacle information by using ultrasonic sensors provided on the vehicle; the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and builds the map in combination with the local obstacle information.
  • a visual recognition-based automatic parking method includes the following steps: a mapping and positioning step and a planning control step;
  • the mapping and positioning step obtains an image of the surroundings of the vehicle by using a camera disposed on the vehicle; identifying a parking point, a parking line, and / or a guide line from the image to establish a map;
  • the planning control step uses the Reeds-Shepp curve to generate a smooth path for the map, and controls the vehicle to complete tracking of the planned path through a proportional-integral-derivative (PID) control algorithm, moving the vehicle to the parking target point;
  • PID proportional-integral-derivative
  • the Reeds-Shepp curve used to generate a smooth path refers to the trajectory planned from the current position to the parking position using the Reeds-Shepp curve.
  • the parking space points, parking space lines and / or guide lines in the recognition image are implemented by a deep learning algorithm.
  • the map is obtained by using an optimization algorithm through input of the parking space point, the parking space line, and / or the guide line information.
  • the image around the vehicle is an image obtained by splicing the images obtained by the cameras.
  • the camera is a fisheye wide-angle camera, and its distortion correction formula is:
  • x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],   (1)
  • an automatic parking method based on visual recognition which is characterized in that the method includes the following steps: a mapping and positioning step and a planning control step;
  • the mapping and positioning step obtains an image of the surroundings of the vehicle by using a camera disposed on the vehicle; identifying a parking point, a parking line, and / or a guide line from the image to establish a map;
  • the planning control step uses the Reeds-Shepp curve to generate a smooth path for the map, and controls the vehicle to complete tracking of the planned path through a proportional-integral-derivative (PID) control algorithm, moving the vehicle to the parking target point;
  • PID proportional-integral-derivative
  • the local obstacle information is obtained by using ultrasonic sensors set on the vehicle; the mapping and positioning sub-module identifies a parking space point, a parking space line and / or a guide line from the image, and builds the map in combination with the local obstacle information.
  • the inventive points of the present invention lie in the following aspects, but are not limited to the following aspects:
  • the map is an information map incorporating local obstacles.
  • the information map incorporating local obstacles provides information guarantee for parking and improves the efficiency of parking planning.
  • non-visual sensors, such as ultrasonic sensors, are used with a clear division of labor with the visual sensors regarding the detected information. In theory, vision sensors are capable of detecting both local obstacles and the information map, but if only vision sensors were used, this would impose a heavy computational load on the later artificial neural network calculations and affect the speed of vehicle parking control. These effects do not appear in traditional vehicle control, because traditional assisted vehicle control and planning depend very little on neural networks. However, precisely because the present invention differs from such sensor-assisted vehicle control and planning, it uses Reeds-Shepp curves and neural networks to identify parking space points, parking space lines and / or guide lines.
  • the present invention therefore needs to choose which information is obtained with the vision sensors; the Reeds-Shepp curve used in the present invention is specially designed for the identified parking space points, parking space lines and / or guide lines.
  • the identification of parking space points, parking space lines and / or guide lines works well, but the vision-sensor method that forms the curve cannot simply be transferred to obstacle recognition. In actual experiments the recognition of obstacles was poor, and for obstacles an ultrasonic sensor is better than a vision sensor.
  • using ultrasonic sensors instead of vision sensors for the local obstacle information can effectively reduce the later data-processing load, while also exploiting the advantage of ultrasonic sensors in detecting obstacle information within the field of view. Using the ultrasonic sensors to selectively detect only local obstacles while keeping vision sensors for everything else was established through rigorous experiments and tests, and is not a simple superposition of the two sensors. This is one of the inventive points of the present invention.
  • the distortion correction formula adopted by the present invention takes into account the position of the fish-eye wide-angle camera on the data acquisition vehicle, and is different from the existing fish-eye image correction.
  • the cameras are positioned so that, when the vehicle enters the parking space, the parking space points, parking space lines and guide lines are close to the imaging center.
  • FIG. 1 is a functional block diagram of an automatic parking provided by an embodiment of the present invention
  • FIG. 2 is an example diagram of a Reeds-Shepp curve provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of a Reeds-Shepp curve planning based on real-time environment information according to an embodiment of the present invention.
  • FIG. 1 shows a functional block diagram of an implementation of an automatic parking system based on deep learning provided by an embodiment of the present invention, including a mapping and positioning sub-module and a planning control sub-module. The details are as follows:
  • This module is mainly used for feature extraction and parking space positioning of the collected images to obtain obstacle maps and related parameter information.
  • the module is divided into four steps: step one, first correct distortion and apply inverse perspective transformation to the images collected by the four fisheye cameras located around the vehicle, and then stitch the surrounding images to obtain a complete ring view; step two, combine massive labeled look-around mosaics and use deep learning algorithms for parking space identification and visual feature extraction; step three, simultaneous localization and mapping (SLAM); step four, fuse ultrasonic information to obtain an obstacle map, in order to provide map information for later path planning and completion of parking.
  • Step 1 Use four fisheye cameras located on the front, rear, left, and right of the vehicle to ensure that the images collected by the cameras cover a 360-degree area around the vehicle, and that the images collected by two adjacent cameras have overlapping areas. Because the images collected by the fisheye cameras have large distortion, distortion correction and inverse perspective transformation must be performed first, after which the surround-view stitching algorithm is run to obtain a two-dimensional top-view surround-view mosaic.
  • the camera in the present invention is a fish-eye wide-angle camera. Due to the large distortion of the images collected by the fisheye camera, the distortion of the collected image information must first be corrected.
  • equation (1) can be used to correct the distortion of the four images collected:
  • x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],   (1)
  • (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of the pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters.
  • the selection of the distortion parameters here takes into consideration the need for the fisheye cameras to capture a 360-degree area around the vehicle, as well as the parking space points, parking space lines and / or guide lines. For example, fisheye cameras are located on the front, rear, left, and right sides of the vehicle to ensure 360-degree coverage without blind spots. In addition, for clear imaging, parking space points, parking space lines, and / or guide lines should be located as close to the center of the image as possible.
  • the two tangential distortion parameters p_1 and p_2 are linearly superimposed, and the correction is applied as 2p_1 y + p_2 (r^2 + 2x^2).
  • the inverse perspective transformations are performed on the four corrected distortion images respectively, that is, the position correspondence between points in the image coordinate system and points on a known plane in the three-dimensional world coordinate system is established.
  • the point obtained by projecting the geometric center of the vehicle vertically downward onto the ground can be taken as the coordinate origin O_w; the Y_w axis is parallel to the rear axle, with the direction toward the left side of the vehicle positive; X_w is perpendicular to Y_w, with the direction toward the front of the vehicle positive; perpendicular to the ground and pointing upward is the positive Z_w direction.
  • This process includes: first, set the field of view of the ring view, that is, determine the zoom factor of the bird's-eye view; then, determine the overlapping correspondence and seams between adjacent images; finally, crop the images along the stitching seams and stitch them together.
  • the first image in the mapping and positioning sub-module shown in FIG. 1 is a ring view generated by splicing.
  • Step 2 Through the deep learning algorithm, the parking space points, parking space lines, guide lines and other information are identified.
  • the labeling information includes parking space points, parking space lines, and guide lines.
  • a supervised learning strategy is adopted, and a deep learning algorithm is used to design and learn parking space information identification network models.
  • the layer network extracts distinguishable visual features and recognizes the parking space information in the ring view.
  • the main reason for using deep learning algorithms to extract visual features such as parking space information is that deep convolutional neural networks not only have their unique advantages of local perception and parameter sharing when processing images; the adaptability and robustness of a network model learned with supervision on massively labeled data are also big advantages.
  • the input of this parking space recognition network model is a ring view
  • the labeled model is used to supervise the network model to learn the visual features of the ring view regarding parking space points, parking space lines, and guide lines.
  • the network output is a segmentation result map, specifically an image with the same resolution as the input mosaic.
  • the semantic attribute includes the attributes of the parking spot, the attributes of the parking line, and the attributes of the guide line.
  • the information of the target parking space including the position of the parking space in the local map, the length, width, and angle of the parking space, is obtained using the parking space points and parking space lines identified by the parking space identification network model.
  • the semantic information of each pixel is obtained by a neural network model, and the vector attributes of the parking space are extracted from the pixel positions of the parking point attributes and the pixel positions of the parking line attributes.
  • the target parking position, the target parking heading, and the parking space width, length, and angle are then computed from the extracted vector attributes of the parking space lines.
  • the ultrasonic obstacle map is combined to determine whether there is an obstacle in each of the visually recognized parking spaces. If there is an obstacle, it is determined that the parking space cannot be parked in or is not empty.
  • step three the visual information obtained from the deep learning algorithm is used as an input, and the optimization algorithm is used to obtain the vehicle pose and the local map established after the task is started.
  • a Gauss-Newton optimization algorithm is used to obtain the parameter of the best matching position between the current real-time segmentation map and the local map as the pose result.
  • Step four Obtain local obstacle information through the ultrasound information, and integrate the obstacle information into the map.
  • the ultrasonic information is used to detect an empty parking space.
  • the distance between the side ultrasonic sensors and obstacles is detected in real time during the process of building the map, and the position of each obstacle in the local map is calculated by combining this with the pose.
  • an obstacle information map is obtained, which includes the target parking space. If there are multiple parking spaces, all parking spaces will be displayed to the user in the form of a human-computer interaction interface, and the user independently selects the target parking space.
  • Path planning is the main strategy for solving automatic parking.
  • the present invention adopts a path tracking method, generates a path in advance, and then uses a controller to perform path tracking.
  • step one, path planning: using the Reeds-Shepp curve to generate a smooth path for the obstacle information map
  • step two, control the vehicle through the proportional-integral-derivative (PID) control algorithm to complete tracking of the planned trajectory; step three, move the vehicle to the parking target point, and the parking task ends.
  • PID proportional-integral-derivative
  • Step 1 For the map fused with local obstacle information, according to the updated environment information, adaptively invoke the Reeds-Shepp curve to generate candidate parking paths. This method is an inventive point of the present invention.
  • the principle of the automatic parking planning technology in the present invention is that during the parking process, as the vehicle gets closer to the parking target position, the surrounding environment information becomes more accurate and complete.
  • when the updated parking environment differs considerably from the previous parking environment, the surrounding parking environment information is updated, and one Reeds-Shepp curve is planned for the trajectory from the current position to the parking position. This mechanism can ensure that Reeds-Shepp curve planning is called in real time to achieve accurate planning.
  • the Reeds-Shepp curve can generate a trajectory from any starting pose (x_0, y_0, theta_0) to any ending pose (x_l, y_l, theta_l) according to the vehicle kinematics model.
  • Reeds-Shepp curve is composed of several arcs or straight segments with a fixed radius, and the radius of the arc is generally the minimum turning radius of the car.
  • the path length here refers to the length of the center trajectory of the rear axle of the car, which is the sum of the arc length of all arcs and the length of the straight line segment.
  • Reeds-Shepp curve is a geometric planning method, which usually consists of the following basic types:
  • C represents a circular arc trajectory
  • | represents a gear shift
  • S represents a straight line segment
  • β represents the turning angle of the specified arc segment; in some cases, a subscript of π/2 is given, because that arc must turn through exactly π/2 radians.
  • Table 1 shows six kinds of motion primitives, which can construct all the best Reeds-Shepp curves.
  • L and R represent left and right turns respectively; + and - represent forward and reverse gear respectively.
  • Subdividing the basic types should give 48 classes; after removing the two C|C|C classes (L⁻R⁺L⁻) and (R⁻L⁺R⁻), only the remaining 46 classes are kept.
  • Step two through the above Reeds-Shepp curve generation method, a path planning strategy is obtained. Then the vehicle is controlled by the PID control algorithm to track the planned trajectory.
  • as the vehicle travels under the control algorithm, the parking environment around the vehicle is continuously updated, so it is necessary to track and update the planned parking trajectory in real time.
  • an environmental difference threshold is first set, and the difference between the historical environment information and the real-time environment information is used to determine whether to update the parking trajectory. If the environmental difference is greater than the set threshold, that is, the surrounding parking environment has relatively obvious changes, Reeds-Shepp curve planning should be performed on the newly acquired images; if the environmental difference is not large, that is, the surrounding parking environment has no relatively obvious changes, the existing path plan is maintained.
  • step three the vehicle follows the real-time planned trajectory to the parking target point, and the parking task ends.
  • this solves the problems of the traditional parking space recognition method, namely that relying solely on ultrasound cannot accurately identify the parking space position and that only a small number of scenes are covered.
  • scenes with data annotation can be covered, and the parking space recognition rate is over 95%, and the recognition error is less than 3 pixels.
  • the Gauss-Newton optimization algorithm is used to process the visual segmentation results. The obtained pose information makes up for the problem of poor pose estimation accuracy caused by no visual feedback during trajectory tracking.
  • modules or steps of the embodiments of the present invention described above may be implemented by a general-purpose computing device, and they may be centralized on a single computing device or distributed to multiple computing devices.
  • they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that given here, or they can be made separately into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An automatic parking system and parking method, belonging to the field of intelligent driving. In the prior art, automatic parking systems are based on traditional path planning algorithms and their performance is poor. The present technical solution provides an automatic parking system based on visual recognition; the system comprises a mapping and positioning sub-module and a planning control sub-module. Because the map used fuses an information map of local obstacles, the adaptability of the automatic parking system to abnormal conditions is improved. In addition, visual information is obtained through deep learning, and Reeds-Shepp curve planning is invoked in real time according to updates of the environment; compared with pose planning without visual feedback, this improves the pose estimation accuracy and the response speed of the parking system.

Description

An automatic parking system and method based on visual recognition
Technical Field
The present application belongs to the field of intelligent driving, and in particular relates to an automatic parking system based on visual recognition.
Background Art
At present, the main technical routes for automatic parking are based on traditional path planning algorithms such as RRT, PRM and A*. The basic idea is to identify the approximate location of the parking space with ultrasonic sensing and randomly generate paths, then perform collision detection on the randomly generated paths, that is, check whether a path passes through obstacles or stays within the vehicle's drivable area. Among all feasible paths, Dijkstra's algorithm or similar methods are then used to select the optimal parking path.
However, the above prior art has the following defects:
(1) Relying solely on ultrasonic sensing, the parking space position cannot be identified accurately, and parking scenarios other than parallel and perpendicular parking cannot be handled; this limits the conditions under which parking is possible.
(2) After the whole parking function is started, pose estimation without visual feedback has poor accuracy.
Summary of the Invention
In view of the problems existing in the prior art, the present invention adopts the following technical solutions:
An automatic parking system based on visual recognition, characterized in that the system comprises a mapping and positioning sub-module and a planning control sub-module;
the mapping and positioning sub-module obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, where the image of the vehicle's surroundings is stitched from the images obtained by each of the cameras; the cameras are fisheye wide-angle cameras, and the distortion correction formula is:
x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],          (1)
where (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of this pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters;
the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds a map;
the planning control sub-module generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point.
Preferably, identifying the parking spaces, parking space lines and/or guide lines in the image is implemented by a deep learning algorithm.
Preferably, the map is obtained by an optimization algorithm from the input parking space point, parking space line and/or guide line information.
According to another aspect of the present invention, an automatic parking system based on visual recognition is provided, characterized in that the system comprises a mapping and positioning sub-module and a planning control sub-module;
the mapping and positioning sub-module obtains images of the surroundings of the vehicle using cameras arranged on the vehicle;
the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds a map;
the planning control sub-module generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point;
the mapping and positioning sub-module obtains local obstacle information using ultrasonic sensors arranged on the vehicle; the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds the map in combination with the local obstacle information.
According to another aspect of the present invention, an automatic parking method based on visual recognition comprises the following steps: a mapping and positioning step and a planning control step;
the mapping and positioning step obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, identifies parking space points, parking space lines and/or guide lines from the images, and builds a map;
the planning control step generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point; generating a smooth path with the Reeds-Shepp curve means planning, with one Reeds-Shepp curve, the trajectory from the current position to the parking position.
Preferably, identifying the parking space points, parking space lines and/or guide lines in the image is implemented by a deep learning algorithm.
Preferably, the map is obtained by an optimization algorithm from the input parking space point, parking space line and/or guide line information.
Preferably, the image of the vehicle's surroundings is stitched from the images obtained by each of the cameras.
Preferably, the cameras are fisheye wide-angle cameras, and the distortion correction formula is:
x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],       (1)
where (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of this pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters.
According to another aspect of the present invention, an automatic parking method based on visual recognition is provided, characterized in that the method comprises the following steps: a mapping and positioning step and a planning control step;
the mapping and positioning step obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, identifies parking space points, parking space lines and/or guide lines from the images, and builds a map;
the planning control step generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point;
in the mapping and positioning step, local obstacle information is obtained using ultrasonic sensors arranged on the vehicle; the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds the map in combination with the local obstacle information.
The inventive points of the present invention lie in the following aspects, but are not limited to them:
(1) The map is an information map that fuses local obstacles. Using an information map fused with local obstacles provides an information guarantee for parking and improves the efficiency of parking planning. A non-visual sensor, such as an ultrasonic sensor, is used with a clear division of labor with the visual sensors regarding the information to be detected. In theory, visual sensors are capable of detecting both local obstacles and the information map, but if only visual sensors were used, this would impose a heavy computational load on the later artificial neural network calculations and affect the speed of vehicle parking control. These effects do not appear in traditional vehicle control, because traditional assisted vehicle control and planning depend very little on neural networks. Precisely because the present invention differs from such sensor-assisted vehicle control and planning, using the Reeds-Shepp curve and neural-network recognition of parking space points, parking space lines and/or guide lines, the present invention must choose which information is obtained with the visual sensors. The Reeds-Shepp curve used in the present invention is designed specifically for the recognized parking space points, parking space lines and/or guide lines; recognition of these elements works well, but the visual-sensor method that forms the curve cannot simply be transferred to obstacle recognition. In actual experiments the recognition of obstacles by vision was poor, and for obstacles an ultrasonic sensor is better than a visual sensor. Through long-term experiments and accumulated experience, the technical staff found that using ultrasonic sensors instead of visual sensors for the local obstacle information effectively reduces the later data-processing load while also exploiting the advantage of ultrasonic sensors in detecting obstacle information within the field of view. That is, using ultrasonic sensors to selectively detect only local obstacles while still using visual sensors for everything else was established through rigorous experiments and tests, and is not a simple superposition of the two kinds of sensors. This is one of the inventive points of the present invention.
(2) Deep-learning-based parking space detection increases the coverage of parking space recognition scenarios; at the same time, fusing the information from visual feedback improves the accuracy of pose estimation. This is one of the inventive points of the present invention.
(3) Since the present invention uses fisheye wide-angle cameras, distortion correction of the images captured by the fisheye cameras is required. The distortion correction formula adopted by the present invention takes into account the positions of the fisheye wide-angle cameras on the data-acquisition vehicle and differs from existing fisheye image correction. For example, on the data-acquisition vehicle, in order to obtain clear image information of the parking space points, parking space lines and/or guide lines, the cameras are positioned so that, when the vehicle is entering the space, these points, lines and guide lines are close to the imaging center. Precisely with this in mind, the two tangential distortion parameters are corrected by linear superposition during distortion correction. See the specific embodiments below for the detailed distortion correction method. Linking the distortion correction with the positions of the parking cameras is one of the inventive points of the present invention.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of this application; they do not limit the present invention. In the drawings:
FIG. 1 is a functional block diagram of automatic parking provided by an embodiment of the present invention;
FIG. 2 is an example diagram of a Reeds-Shepp curve provided by an embodiment of the present invention;
FIG. 3 is a flowchart of Reeds-Shepp curve planning based on real-time environment information provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments and the drawings. The exemplary embodiments of the present invention and their description here are used to explain the present invention and are not a limitation of it.
FIG. 1 shows a functional block diagram of an implementation of the deep-learning-based automatic parking system provided by an embodiment of the present invention, comprising a mapping and positioning sub-module and a planning control sub-module, detailed as follows:
1. Mapping and positioning sub-module
This module is mainly used to perform feature extraction and parking space localization on the collected images to obtain an obstacle map and related parameter information. The module is divided into four steps: Step 1, first perform distortion correction and inverse perspective transformation on the images collected by the four fisheye cameras located around the vehicle, and then stitch the surround-view images to obtain a complete surround view; Step 2, combine massive annotated surround-view mosaics and use a deep learning algorithm for parking space recognition and visual feature extraction; Step 3, simultaneous localization and mapping (SLAM); Step 4, fuse the ultrasonic information to obtain an obstacle map, so as to provide map information for later path planning and completion of parking. The details are as follows:
Step 1: Four fisheye cameras located at the front, rear, left and right of the vehicle ensure that the collected images cover the 360-degree area around the vehicle, and the images collected by any two adjacent cameras have an overlapping region. Because the images collected by the fisheye cameras have large distortion, de-distortion (image restoration) and inverse perspective transformation must be performed first, after which the surround-view stitching algorithm is run to obtain a two-dimensional top-view surround-view mosaic.
To obtain a larger field of view, the cameras in the present invention are fisheye wide-angle cameras. Because the images collected by fisheye cameras have large distortion, the collected image information must first be corrected for distortion.
According to the camera distortion parameters obtained from calibration, distortion correction is performed using checkerboard corner detection and nonlinear fitting. Specifically, equation (1) can be used to correct the distortion of each of the four collected images:
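For illustration only, the following is a generic OpenCV checkerboard calibration sketch; the pattern size, square size and image folder are placeholder assumptions, and the embodiment's actual calibration procedure is not disclosed beyond "checkerboard corner detection and nonlinear fitting".

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9 x 6 inner corners; square size in metres (placeholder values)
pattern, square = (9, 6), 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, gray = [], [], None
for path in glob.glob("calib/front/*.png"):          # placeholder image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

if img_pts:
    # Nonlinear fit of the intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
    # for very strong fisheye distortion, cv2.fisheye.calibrate is the usual alternative.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms, "distortion:", dist.ravel())
```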
x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],      (1)
where (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of this pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters. The distortion parameters are chosen taking into account the need for the fisheye cameras to cover the 360-degree area around the vehicle as well as the parking space points, parking space lines and/or guide lines. For example, the fisheye cameras are located at the front, rear, left and right of the vehicle to guarantee 360-degree coverage without blind spots; in addition, for clear imaging, the parking space points, parking space lines and/or guide lines should be located as close to the image center as possible. Specifically, the two tangential distortion parameters p_1 and p_2 are linearly superimposed, and the correction is applied as 2p_1 y + p_2 (r^2 + 2x^2).
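For illustration (not part of the original disclosure), a minimal NumPy sketch of applying equation (1) to normalized pixel coordinates; the calibration values are placeholders, and the y-coordinate correction is assumed to be the symmetric counterpart of the x form given above.

```python
import numpy as np

def correct_distortion(x, y, k, p):
    """Apply equation (1): radial terms k = [k1, k2, k3], tangential terms p = [p1, p2]."""
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    # x correction exactly as written in equation (1)
    x_cor = x + x * radial + (2 * p1 * y + p2 * (r2 + 2 * x * x))
    # y correction assumed symmetric (the patent only spells out the x form)
    y_cor = y + y * radial + (2 * p2 * x + p1 * (r2 + 2 * y * y))
    return x_cor, y_cor

# Example on a small grid of normalized coordinates with placeholder parameters
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x_cor, y_cor = correct_distortion(xs, ys, k=[-0.30, 0.08, -0.01], p=[1e-3, -5e-4])
```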
Inverse perspective transformation is performed on each of the four distortion-corrected images, that is, the positional correspondence between points in the image coordinate system and points on a known plane in the three-dimensional world coordinate system is established.
Specifically, the point obtained by projecting the geometric center of the vehicle vertically downward onto the ground is taken as the coordinate origin O_w; the Y_w axis is parallel to the rear axle of the vehicle, with the direction toward the left side of the vehicle positive; X_w is perpendicular to Y_w, with the direction toward the front of the vehicle positive; perpendicular to the ground and pointing upward is the positive direction of the Z_w axis. This coordinate system is taken as the world coordinate system. Now assume Z_w = 0, that is, assume that all points in the image lie on the ground in the three-dimensional world coordinate system; using the intrinsic and extrinsic matrices of the four fisheye cameras, inverse perspective transformation is applied to the images collected by the four fisheye cameras respectively, yielding bird's-eye views with a top-down effect.
The inverse perspective transformation yields four bird's-eye views with a top-down effect; by aligning the overlapping regions, the four bird's-eye views can be stitched into a surround view.
This process includes: first, setting the field of view of the surround view, that is, determining the zoom factor of the bird's-eye views;
then, determining the overlapping correspondence and the seams between adjacent images; four straight lines within the pairwise overlapping regions of the four images are selected as stitching seams;
finally, cropping the four images along the stitching seams and stitching them together.
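A sketch of the ground-plane (Z_w = 0) inverse perspective mapping for a single camera using OpenCV homography utilities, added here for illustration; the point correspondences, zoom factor and dummy image are placeholder assumptions, not the calibration of the embodiment.

```python
import cv2
import numpy as np

# Four image points (pixels) and their known ground-plane positions (metres, Z_w = 0).
# These correspondences are placeholders; in practice they come from the camera's
# intrinsic and extrinsic calibration.
img_pts = np.float32([[310, 420], [650, 418], [720, 560], [250, 565]])
scale = 50.0                                   # pixels per metre: the surround-view zoom factor
ground_m = np.float32([[-1.0, 3.0], [1.0, 3.0], [1.0, 1.5], [-1.0, 1.5]])
bev_pts = (ground_m * scale + np.float32([400, 600])).astype(np.float32)

H = cv2.getPerspectiveTransform(img_pts, bev_pts)   # image plane -> bird's-eye view

frame = np.zeros((720, 1280, 3), np.uint8)          # stand-in for a distortion-corrected frame
bev_front = cv2.warpPerspective(frame, H, (800, 800))

# Stitching: each camera's bird's-eye view is cropped by a fixed seam mask and pasted
# into the common surround-view canvas (the seam masks are omitted here for brevity).
```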
The first image in the mapping and positioning sub-module shown in FIG. 1 is a surround view generated by stitching.
Step 2: Through a deep learning algorithm, information such as the parking space points, parking space lines and guide lines in the mosaic is identified.
Specifically, massive manually annotated surround views are first combined; the annotation information includes parking space points, parking space lines and guide lines. A supervised learning strategy is adopted, and a deep learning algorithm is used to design and learn a parking space information recognition network model; this network model extracts distinguishable visual features through a multi-layer network and recognizes the parking space information in the surround view.
The main reason for using a deep learning algorithm to extract visual features such as parking space information is that a deep convolutional neural network not only has unique advantages such as local perception and parameter sharing when processing images; the adaptability and robustness of a network model learned with supervision on massive annotated data are also great advantages.
Specifically, the input of this parking space recognition network model is the surround view, and the annotation information supervises the network model to learn the visual features of the surround view related to parking space points, parking space lines and guide lines. The network output is a segmentation result map, specifically an image with the same resolution as the input mosaic; every pixel in this image has a semantic attribute, and the semantic attributes include the parking space point attribute, the parking space line attribute and the guide line attribute.
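The embodiment does not disclose a specific network architecture, so the following is only a generic stand-in: a minimal PyTorch encoder-decoder that maps a surround-view image to per-pixel logits over four assumed classes (background, parking space point, parking space line, guide line).

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder mapping a surround-view image to per-pixel class logits."""
    def __init__(self, num_classes: int = 4):   # background, space point, space line, guide line
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))     # logits at the same H x W as the input

model = TinySegNet()
surround_view = torch.randn(1, 3, 256, 256)                 # placeholder mosaic batch
logits = model(surround_view)                               # (1, 4, 256, 256)
semantic_map = logits.argmax(dim=1)                         # per-pixel semantic attribute
annotated_mask = torch.randint(0, 4, (1, 256, 256))         # placeholder ground-truth labels
loss = nn.CrossEntropyLoss()(logits, annotated_mask)        # supervised training signal
```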
The information of the target parking space, including the position of the parking space in the local map and the parking space length, width and angle, is obtained using the parking space points and parking space lines recognized by the above parking space information recognition network model.
Specifically, the semantic information of each pixel is obtained from the neural network model; the vector attributes of the parking space lines are extracted from the pixel positions with the parking space point attribute and the pixel positions with the parking space line attribute, and from these vector attributes the target parking position, the target parking heading, and the parking space width, length and angle are computed. The ultrasonic obstacle map is combined to determine whether each visually recognized parking space contains an obstacle; if there is an obstacle, the parking space is judged to be un-parkable or not empty.
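A rough sketch of deriving slot geometry from the per-pixel semantics: cluster the parking-space-point pixels, treat a pair of corners as the entrance edge, and compute the slot centre, heading and width. The clustering rule, class ids and slot depth are simplified assumptions for illustration, not the exact procedure of the embodiment.

```python
import numpy as np

CORNER, LINE = 1, 2                      # semantic class ids assumed from the segmentation

def corner_centroids(label_map, eps=8.0):
    """Group parking-space-point pixels into clusters by a simple distance threshold."""
    pts = np.argwhere(label_map == CORNER).astype(float)   # (row, col) positions
    clusters = []
    for p in pts:
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) < eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

def slot_from_corners(c0, c1, depth_px):
    """Entrance edge c0-c1 plus an assumed slot depth -> centre, heading, width (pixels)."""
    edge = c1 - c0
    width = np.linalg.norm(edge)
    heading = np.arctan2(edge[0], edge[1])            # orientation of the entrance edge
    normal = np.array([edge[1], -edge[0]]) / width    # unit vector pointing into the slot
    centre = (c0 + c1) / 2 + normal * depth_px / 2
    return centre, heading, width

label_map = np.zeros((400, 400), dtype=int)           # placeholder segmentation output
corners = corner_centroids(label_map)
if len(corners) >= 2:
    centre, heading, width = slot_from_corners(corners[0], corners[1], depth_px=250)
```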
Step 3: Using the visual information obtained from the deep learning algorithm as input, an optimization algorithm is used to obtain the vehicle pose at the current moment and the local map built since the task was started.
Specifically, with the vehicle pose as the optimization parameter, a Gauss-Newton optimization algorithm is used to obtain, as the pose result, the parameters of the best matching position between the current real-time segmentation map and the local map.
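The embodiment does not give its matching cost, so the following Gauss-Newton sketch assumes point-to-point residuals between observed feature points and their map counterparts, with the SE(2) pose (t_x, t_y, θ) as the optimization parameter.

```python
import numpy as np

def gauss_newton_se2(p, q, iters=10):
    """Estimate (tx, ty, theta) so that R(theta) @ p_i + t best matches q_i (least squares)."""
    tx, ty, th = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        r = (p @ R.T + np.array([tx, ty])) - q           # residuals, shape (N, 2)
        J = np.zeros((2 * len(p), 3))
        J[0::2, 0] = 1.0                                 # d r_x / d tx
        J[1::2, 1] = 1.0                                 # d r_y / d ty
        J[0::2, 2] = -s * p[:, 0] - c * p[:, 1]          # d r_x / d theta
        J[1::2, 2] =  c * p[:, 0] - s * p[:, 1]          # d r_y / d theta
        delta = np.linalg.solve(J.T @ J, -J.T @ r.reshape(-1))
        tx, ty, th = tx + delta[0], ty + delta[1], th + delta[2]
    return tx, ty, th

# Placeholder data: the map points q are the observed points p rotated by 0.1 rad and shifted
p = np.random.rand(50, 2)
ang = 0.1
Rt = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
q = p @ Rt.T + np.array([0.5, -0.2])
print(gauss_newton_se2(p, q))   # should approach (0.5, -0.2, 0.1)
```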
Step 4: Local obstacle information is obtained from the ultrasonic information, and the obstacle information is fused into the map.
Specifically, this ultrasonic information is used to detect empty parking spaces. During map building, the distance between the side ultrasonic sensors and obstacles is detected in real time, and, combined with the pose, the position of each obstacle in the local map is calculated.
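A small sketch of projecting a side ultrasonic range reading into the local map given the vehicle pose; the sensor mounting offset, beam direction and grid resolution are placeholders for illustration.

```python
import numpy as np

def obstacle_world_position(vehicle_pose, sensor_offset, sensor_yaw, range_m):
    """vehicle_pose = (x, y, theta) in the local map; the sensor points sideways."""
    x, y, th = vehicle_pose
    # Sensor position in the map frame (offset given in the vehicle frame)
    sx = x + sensor_offset[0] * np.cos(th) - sensor_offset[1] * np.sin(th)
    sy = y + sensor_offset[0] * np.sin(th) + sensor_offset[1] * np.cos(th)
    # The obstacle lies along the sensor's beam direction at the measured range
    beam = th + sensor_yaw
    return sx + range_m * np.cos(beam), sy + range_m * np.sin(beam)

grid = np.zeros((200, 200), dtype=np.uint8)          # placeholder local map, 0.1 m cells
ox, oy = obstacle_world_position((3.0, 2.0, 0.3), sensor_offset=(1.2, 0.9),
                                 sensor_yaw=np.pi / 2, range_m=0.8)
grid[int(oy / 0.1), int(ox / 0.1)] = 1               # mark the cell as occupied
```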
At this point an obstacle information map is obtained, and this map includes the target parking space. If there are multiple parking spaces, all of them are displayed to the user through a human-machine interaction interface, and the user independently selects the target parking space.
2. Planning control sub-module
The main function of this module is to plan a path according to the obstacle map and complete the final parking task. Path planning is the main strategy for solving automatic parking. The present invention adopts a path-tracking approach: a path is generated in advance, and a controller is then used to track it.
This module is divided into three steps: Step 1, path planning: for the obstacle information map, the Reeds-Shepp curve is used to generate a smooth path; Step 2, the vehicle is controlled through a proportional-integral-derivative (PID) control algorithm to track the planned trajectory; Step 3, the vehicle is moved to the parking target point and the parking task ends. The detailed steps are described as follows:
Step 1: For the map fused with local obstacle information, Reeds-Shepp curve planning is adaptively invoked according to the updated environment information to generate candidate parking paths. This method is an inventive point of the present invention.
The principle of the automatic parking planning technology in the present invention is that, during the parking process, as the vehicle gets closer to the parking target position, the information about the surrounding parking environment becomes more and more accurate and complete; when the updated parking environment differs considerably from the previous parking environment, the information about the surrounding parking environment is updated and one Reeds-Shepp curve is planned for the trajectory from the current position to the parking position. This mechanism ensures that Reeds-Shepp curve planning is invoked in real time to achieve accurate planning.
The Reeds-Shepp curve can generate a trajectory that conforms to the vehicle kinematics model from any starting pose (x_0, y_0, theta_0) to any goal pose (x_l, y_l, theta_l).
Specifically, a Reeds-Shepp curve is composed of several arcs of fixed radius or straight-line segments joined together, and the radius of the arcs is generally the minimum turning radius of the car. The path length here refers to the length of the trajectory of the center of the car's rear axle, that is, the sum of the arc lengths of all arcs and the lengths of the straight segments. The Reeds-Shepp curve is a geometric planning method and is usually composed of the following basic types:
{C|C|C, CC|C, C|CC, CSC, CC_β|C_βC, C|C_βC_β|C, C|C_{π/2}SC, CSC_{π/2}|C, C|C_{π/2}SC_{π/2}|C}
where C denotes a circular-arc segment; | denotes a gear change; S denotes a straight-line segment; β denotes the turning angle of the indicated arc segment; in some cases a subscript of π/2 is given, because that arc must turn through exactly π/2 radians.
Table 1 lists the six motion primitives, from which all optimal Reeds-Shepp curves can be constructed.
Symbol   Gear: u_1   Steering: u_2
S+            1             0
S-           -1             0
L+            1             1
L-           -1             1
R+            1            -1
R-           -1            -1
Table 1. The six motion primitives
Here L and R denote a left turn and a right turn respectively; + and - denote forward and reverse gear respectively.
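For illustration, a tiny integrator of the six motion primitives under the standard unit-radius Reeds-Shepp kinematics (dx = u_1 cos θ, dy = u_1 sin θ, dθ = u_1 u_2); the step size and the example sequence are placeholders.

```python
import math

# The six motion primitives of Table 1: (gear u1, steering u2)
PRIMITIVES = {"S+": (1, 0), "S-": (-1, 0),
              "L+": (1, 1), "L-": (-1, 1),
              "R+": (1, -1), "R-": (-1, -1)}

def roll_out(pose, primitive, length, step=0.01):
    """Integrate one primitive for the given arc/segment length (unit turning radius)."""
    x, y, th = pose
    u1, u2 = PRIMITIVES[primitive]
    n = max(1, int(length / step))
    for _ in range(n):
        x += u1 * step * math.cos(th)
        y += u1 * step * math.sin(th)
        th += u1 * u2 * step
    return x, y, th

# Example: a forward left arc of pi/2, a 2 m forward straight, then a reverse right arc
pose = (0.0, 0.0, 0.0)
for prim, ln in [("L+", math.pi / 2), ("S+", 2.0), ("R-", math.pi / 4)]:
    pose = roll_out(pose, prim, ln)
print(pose)
```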
For the arcs and straight segments, the above six types are subdivided according to steering and gear position. Subdividing the basic types gives all the sub-types in Table 2 below:
[Table 2 is reproduced as an image in the original publication and lists the subdivided Reeds-Shepp word types.]
Table 2. Sub-types obtained by subdividing the six motion primitives
The subdivision should give 48 classes; after removing the two C|C|C classes (L⁻R⁺L⁻) and (R⁻L⁺R⁻), only the remaining 46 classes are shown.
Specifically, taking FIG. 2 as an example, with q_I as the start point (horizontal to the right as its positive direction) and q_G as the goal point (vertical upward as its positive direction), a trajectory planning strategy of the word type shown in the original figure (the specific Reeds-Shepp word is given as an image in the original publication) can be used; the planning result is shown in FIG. 2.
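For illustration, a sketch of evaluating a single Reeds-Shepp word family, the forward word L+S+L+ (a CSC type), for a goal pose expressed in the start frame and normalized to unit turning radius; a complete planner enumerates all word types (for example via reflection and time reversal) and keeps the shortest feasible one. The formulas follow the standard Reeds-Shepp construction and are not taken from the patent text.

```python
import math

def mod2pi(a):
    """Wrap an angle to [0, 2*pi)."""
    return a % (2.0 * math.pi)

def lp_sp_lp(x, y, phi):
    """Segment lengths (t, u, v) of the L+ S+ L+ word for goal (x, y, phi),
    expressed in the start frame with unit minimum turning radius.
    Returns None if this word cannot reach the goal with forward motion only."""
    u = math.hypot(x - math.sin(phi), y - 1.0 + math.cos(phi))   # straight-segment length
    t = math.atan2(y - 1.0 + math.cos(phi), x - math.sin(phi))   # first left arc angle
    if t < 0.0:
        return None
    v = mod2pi(phi - t)                                          # second left arc angle
    return t, u, v                                               # total length = t + u + v

# Example: goal 4 m ahead, 1 m to the left, heading rotated by 30 degrees
word = lp_sp_lp(4.0, 1.0, math.radians(30.0))
if word is not None:
    t, u, v = word
    print("L+ arc %.3f rad, straight %.3f, L+ arc %.3f rad" % (t, u, v))
```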
Step 2: Through the above Reeds-Shepp curve generation method, a planned path strategy is obtained. Then the vehicle is controlled by the PID control algorithm to track the planned trajectory.
Specifically, as the vehicle drives under the PID control algorithm, the parking environment around the vehicle is continuously updated, so the planned parking trajectory must be tracked and updated in real time.
Specifically, as shown in FIG. 3, an environment-difference threshold is first set, and the difference between the historical environment information and the real-time environment information is used to decide whether to update the parking trajectory. If the environment difference is greater than the set threshold, that is, the surrounding parking environment has changed relatively obviously, Reeds-Shepp curve planning is performed on the newly collected images; if the environment difference is not large, that is, the surrounding parking environment has not changed obviously, the existing path plan is maintained.
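A schematic loop combining the environment-difference threshold of FIG. 3 with a PID correction on the cross-track error of the planned path; the interfaces (perceive, plan_reeds_shepp, cross_track_error, apply_steering, at_goal), the gains and the difference metric are placeholders for illustration only.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

ENV_DIFF_THRESHOLD = 0.3           # placeholder threshold on the environment difference
steer_pid = PID(kp=1.2, ki=0.02, kd=0.15, dt=0.05)

def env_difference(env_a, env_b):
    # Placeholder metric, e.g. the fraction of occupancy cells that changed
    return float((env_a != env_b).mean())

def parking_loop(perceive, plan_reeds_shepp, cross_track_error, apply_steering, at_goal):
    """Only the decision structure is shown; the five callables are assumed interfaces."""
    prev_env = perceive()
    path = plan_reeds_shepp(prev_env)
    while not at_goal():
        env = perceive()
        if env_difference(env, prev_env) > ENV_DIFF_THRESHOLD:
            path = plan_reeds_shepp(env)       # replan only on a clear environment change
            prev_env = env
        apply_steering(steer_pid.step(cross_track_error(path)))
```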
Step 3: The vehicle follows the trajectory planned in real time to the parking target point, and the parking task ends.
In the embodiment of the present invention, on the one hand, based on a massive annotated image database and advanced deep learning algorithms, the problems of the traditional parking space recognition method are solved, namely that relying solely on ultrasound cannot accurately identify the parking space position and that only a small number of scenes are covered. In this embodiment, every scene with annotated data can be covered, the parking space recognition rate is above 95%, and the recognition error is less than 3 pixels. On the other hand, based on the deep learning segmentation results, the pose information obtained by processing the visual segmentation results with the Gauss-Newton optimization algorithm compensates for the poor pose estimation accuracy caused by the absence of visual feedback during trajectory tracking.
Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that given here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the embodiments of the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

  1. An automatic parking system based on visual recognition, characterized in that the system comprises a mapping and positioning sub-module and a planning control sub-module;
    the mapping and positioning sub-module obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, where the image of the vehicle's surroundings is stitched from the images obtained by each of the cameras; the cameras are fisheye wide-angle cameras, and the distortion correction formula is:
    x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],        (1)
    where (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of this pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters;
    the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds a map;
    the planning control sub-module generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point.
  2. The system according to claim 1, characterized in that identifying the parking spaces, parking space lines and/or guide lines in the image is implemented by a deep learning algorithm.
  3. The system according to any one of claims 1-2, characterized in that the map is obtained by an optimization algorithm from the input parking space point, parking space line and/or guide line information.
  4. An automatic parking system based on visual recognition, characterized in that the system comprises a mapping and positioning sub-module and a planning control sub-module;
    the mapping and positioning sub-module obtains images of the surroundings of the vehicle using cameras arranged on the vehicle;
    the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds a map;
    the planning control sub-module generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point;
    the mapping and positioning sub-module obtains local obstacle information using ultrasonic sensors arranged on the vehicle; the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds the map in combination with the local obstacle information.
  5. An automatic parking method based on visual recognition, characterized in that the method comprises the following steps: a mapping and positioning step and a planning control step;
    the mapping and positioning step obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, identifies parking space points, parking space lines and/or guide lines from the images, and builds a map;
    the planning control step generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point; generating a smooth path with the Reeds-Shepp curve means planning, with one Reeds-Shepp curve, the trajectory from the current position to the parking position.
  6. The method according to claim 5, characterized in that identifying the parking space points, parking space lines and/or guide lines in the image is implemented by a deep learning algorithm.
  7. The method according to any one of claims 5-6, characterized in that the map is obtained by an optimization algorithm from the input parking space point, parking space line and/or guide line information.
  8. The method according to any one of claims 5-7, characterized in that the image of the vehicle's surroundings is stitched from the images obtained by each of the cameras.
  9. The method according to claim 5, characterized in that the cameras are fisheye wide-angle cameras, and the distortion correction formula is:
    x_cor = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2 (r^2 + 2x^2)],          (1)
    where (x, y) are the original coordinates of a pixel in the image; (x_cor, y_cor) are the coordinates of this pixel after distortion correction; [k_1, k_2, k_3] are the radial distortion parameters; [p_1, p_2] are the tangential distortion parameters.
  10. An automatic parking method based on visual recognition, characterized in that the method comprises the following steps: a mapping and positioning step and a planning control step;
    the mapping and positioning step obtains images of the surroundings of the vehicle using cameras arranged on the vehicle, identifies parking space points, parking space lines and/or guide lines from the images, and builds a map;
    the planning control step generates a smooth path on the map using the Reeds-Shepp curve, and controls the vehicle through a proportional-integral-derivative (PID) control algorithm to track the planned path, moving the vehicle to the parking target point;
    in the mapping and positioning step, local obstacle information is obtained using ultrasonic sensors arranged on the vehicle; the mapping and positioning sub-module identifies parking space points, parking space lines and/or guide lines from the image, and builds the map in combination with the local obstacle information.
PCT/CN2018/113658 2018-09-17 2018-11-02 An automatic parking system and method based on visual recognition WO2020056874A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811079125.2A CN109720340B (zh) 2018-09-17 2018-09-17 An automatic parking system and method based on visual recognition
CN201811079125.2 2018-09-17

Publications (1)

Publication Number Publication Date
WO2020056874A1 true WO2020056874A1 (zh) 2020-03-26

Family

ID=66295691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113658 WO2020056874A1 (zh) 2018-09-17 2018-11-02 An automatic parking system and method based on visual recognition

Country Status (2)

Country Link
CN (1) CN109720340B (zh)
WO (1) WO2020056874A1 (zh)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111599217A (zh) * 2020-06-04 2020-08-28 纵目科技(上海)股份有限公司 一种自主泊车系统架构、架构实现方法、终端和存储介质
CN111626348A (zh) * 2020-05-20 2020-09-04 安徽江淮汽车集团股份有限公司 自动泊车测试模型构建方法、设备、存储介质及装置
CN111640062A (zh) * 2020-05-15 2020-09-08 上海赫千电子科技有限公司 一种车载环视图像的自动拼接方法
CN111723659A (zh) * 2020-05-14 2020-09-29 上海欧菲智能车联科技有限公司 泊车位确定方法、装置、计算机设备和存储介质
CN111753639A (zh) * 2020-05-06 2020-10-09 上海欧菲智能车联科技有限公司 感知地图生成方法、装置、计算机设备和存储介质
CN111860228A (zh) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 用于自主泊车的方法、装置、设备以及存储介质
CN112180373A (zh) * 2020-09-18 2021-01-05 纵目科技(上海)股份有限公司 一种多传感器融合的智能泊车系统和方法
CN112880696A (zh) * 2021-01-13 2021-06-01 成都朴为科技有限公司 一种基于同时建图与定位的泊车系统及方法
CN112937554A (zh) * 2021-01-30 2021-06-11 惠州华阳通用电子有限公司 一种泊车方法及系统
CN113436275A (zh) * 2021-07-12 2021-09-24 超级视线科技有限公司 一种基于标定板的泊位尺寸确定方法及系统
CN113592949A (zh) * 2021-07-01 2021-11-02 广东工业大学 用于车辆无线泊车影像的控制系统及方法
CN113589685A (zh) * 2021-06-10 2021-11-02 常州工程职业技术学院 一种基于深度神经网络的挪车机器人控制系统及其方法
CN113609148A (zh) * 2021-08-17 2021-11-05 广州小鹏自动驾驶科技有限公司 一种地图更新的方法和装置
CN113753029A (zh) * 2021-08-27 2021-12-07 惠州华阳通用智慧车载系统开发有限公司 一种基于光流法的自动泊车方法及系统
CN113781300A (zh) * 2021-08-17 2021-12-10 东风汽车集团股份有限公司 一种用于远距离自主泊车的车辆视觉定位方法
CN113899377A (zh) * 2021-08-23 2022-01-07 武汉光庭信息技术股份有限公司 一种基于相机的自动泊车终点相对坐标的测量方法及系统
CN114030463A (zh) * 2021-11-23 2022-02-11 上海汽车集团股份有限公司 一种自动泊车系统的路径规划方法及装置
CN114179785A (zh) * 2021-11-22 2022-03-15 岚图汽车科技有限公司 一种基于面向服务的融合泊车控制系统、电子设备和车辆
CN114241437A (zh) * 2021-11-19 2022-03-25 岚图汽车科技有限公司 一种特定区域泊车系统、控制方法及其设备
CN114454872A (zh) * 2020-11-10 2022-05-10 上汽通用汽车有限公司 泊车系统和泊车方法
CN114926820A (zh) * 2022-06-09 2022-08-19 东风汽车集团股份有限公司 基于深度学习和图像帧优化的斜车位识别方法及系统
CN115235452A (zh) * 2022-07-22 2022-10-25 上海师范大学 基于uwb/imu和视觉信息融合的智能泊车定位系统及方法
CN115903837A (zh) * 2022-12-19 2023-04-04 湖州丽天智能科技有限公司 一种车载光伏机器人自动充电方法和系统
CN116229426A (zh) * 2023-05-09 2023-06-06 华东交通大学 基于全景环视图像的无人驾驶泊车停车位检测方法
CN116772744A (zh) * 2023-08-24 2023-09-19 成都量芯集成科技有限公司 一种基于激光测距和视觉融合的3d扫描装置及其方法
WO2024038687A1 (en) * 2022-08-19 2024-02-22 Mitsubishi Electric Corporation System and method for controlling movement of a vehicle
CN118097623A (zh) * 2024-04-22 2024-05-28 纽劢科技(上海)有限公司 基于深度学习的自动泊车障碍物接地线的检测方法、系统

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110293966B (zh) * 2019-06-28 2021-06-01 北京地平线机器人技术研发有限公司 车辆泊车控制方法,车辆泊车控制装置和电子设备
CN110751850B (zh) * 2019-08-30 2023-03-07 的卢技术有限公司 一种基于深度神经网络的车位识别方法和系统
CN110705359B (zh) * 2019-09-05 2023-03-03 北京智行者科技股份有限公司 一种车位检测方法
CN110606071A (zh) * 2019-09-06 2019-12-24 中国第一汽车股份有限公司 一种泊车方法、装置、车辆和存储介质
CN110562248B (zh) * 2019-09-17 2020-09-25 浙江吉利汽车研究院有限公司 一种基于无人机的自动泊车系统及自动泊车方法
CN111176288A (zh) * 2020-01-07 2020-05-19 深圳南方德尔汽车电子有限公司 基于Reedsshepp全局路径规划方法、装置、计算机设备及存储介质
CN111274343B (zh) * 2020-01-20 2023-11-24 阿波罗智能技术(北京)有限公司 一种车辆定位方法、装置、电子设备及存储介质
CN111291650B (zh) * 2020-01-21 2023-06-20 北京百度网讯科技有限公司 自动泊车辅助的方法及装置
WO2021226772A1 (zh) * 2020-05-11 2021-11-18 上海欧菲智能车联科技有限公司 环视图显示方法、装置、计算机设备和存储介质
CN111678518B (zh) * 2020-05-29 2023-07-28 南京市德赛西威汽车电子有限公司 一种用于修正自动泊车路径的视觉定位方法
CN112644479B (zh) * 2021-01-07 2022-05-13 广州小鹏自动驾驶科技有限公司 一种泊车控制方法和装置
CN112660117B (zh) * 2021-01-19 2022-12-13 广州小鹏自动驾驶科技有限公司 一种自动泊车方法、泊车系统、计算机设备及存储介质
CN114274948A (zh) * 2021-12-15 2022-04-05 武汉光庭信息技术股份有限公司 一种基于360度全景的自动泊车方法及装置
CN114312759A (zh) * 2022-01-21 2022-04-12 山东浪潮科学研究院有限公司 一种智能辅助停车的方法、设备及存储介质
CN118082811B (zh) * 2024-04-23 2024-08-09 知行汽车科技(苏州)股份有限公司 一种泊车控制方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102963355A (zh) * 2012-11-01 2013-03-13 同济大学 一种智能辅助泊车方法及其实现系统
CN103600707A (zh) * 2013-11-06 2014-02-26 同济大学 一种智能泊车系统的泊车位检测装置及方法
CN106114623A (zh) * 2016-06-16 2016-11-16 江苏大学 一种基于人类视觉的自动泊车路径规划方法及系统
WO2017003052A1 (ko) * 2015-06-29 2017-01-05 엘지전자 주식회사 차량 운전 보조 방법 및 차량

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6319213B2 (ja) * 2015-07-10 2018-05-09 トヨタ自動車株式会社 ハイブリッド車両の制御装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102963355A (zh) * 2012-11-01 2013-03-13 同济大学 一种智能辅助泊车方法及其实现系统
CN103600707A (zh) * 2013-11-06 2014-02-26 同济大学 一种智能泊车系统的泊车位检测装置及方法
WO2017003052A1 (ko) * 2015-06-29 2017-01-05 엘지전자 주식회사 차량 운전 보조 방법 및 차량
CN106114623A (zh) * 2016-06-16 2016-11-16 江苏大学 一种基于人类视觉的自动泊车路径规划方法及系统

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753639B (zh) * 2020-05-06 2024-08-16 上海欧菲智能车联科技有限公司 感知地图生成方法、装置、计算机设备和存储介质
CN111753639A (zh) * 2020-05-06 2020-10-09 上海欧菲智能车联科技有限公司 感知地图生成方法、装置、计算机设备和存储介质
CN111723659A (zh) * 2020-05-14 2020-09-29 上海欧菲智能车联科技有限公司 泊车位确定方法、装置、计算机设备和存储介质
CN111723659B (zh) * 2020-05-14 2024-01-09 上海欧菲智能车联科技有限公司 泊车位确定方法、装置、计算机设备和存储介质
CN111640062B (zh) * 2020-05-15 2023-06-09 上海赫千电子科技有限公司 一种车载环视图像的自动拼接方法
CN111640062A (zh) * 2020-05-15 2020-09-08 上海赫千电子科技有限公司 一种车载环视图像的自动拼接方法
CN111626348B (zh) * 2020-05-20 2024-02-02 安徽江淮汽车集团股份有限公司 自动泊车测试模型构建方法、设备、存储介质及装置
CN111626348A (zh) * 2020-05-20 2020-09-04 安徽江淮汽车集团股份有限公司 自动泊车测试模型构建方法、设备、存储介质及装置
CN111599217A (zh) * 2020-06-04 2020-08-28 纵目科技(上海)股份有限公司 一种自主泊车系统架构、架构实现方法、终端和存储介质
CN111599217B (zh) * 2020-06-04 2023-06-13 纵目科技(上海)股份有限公司 一种自主泊车系统架构、架构实现方法、终端和存储介质
CN111860228B (zh) * 2020-06-30 2024-01-16 阿波罗智能技术(北京)有限公司 用于自主泊车的方法、装置、设备以及存储介质
CN111860228A (zh) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 用于自主泊车的方法、装置、设备以及存储介质
CN112180373A (zh) * 2020-09-18 2021-01-05 纵目科技(上海)股份有限公司 一种多传感器融合的智能泊车系统和方法
CN112180373B (zh) * 2020-09-18 2024-04-19 纵目科技(上海)股份有限公司 一种多传感器融合的智能泊车系统和方法
CN114454872A (zh) * 2020-11-10 2022-05-10 上汽通用汽车有限公司 泊车系统和泊车方法
CN112880696A (zh) * 2021-01-13 2021-06-01 成都朴为科技有限公司 一种基于同时建图与定位的泊车系统及方法
CN112937554A (zh) * 2021-01-30 2021-06-11 惠州华阳通用电子有限公司 一种泊车方法及系统
CN113589685A (zh) * 2021-06-10 2021-11-02 常州工程职业技术学院 一种基于深度神经网络的挪车机器人控制系统及其方法
CN113589685B (zh) * 2021-06-10 2024-04-09 常州工程职业技术学院 一种基于深度神经网络的挪车机器人控制系统及其方法
CN113592949B (zh) * 2021-07-01 2024-03-29 广东工业大学 用于车辆无线泊车影像的控制系统及方法
CN113592949A (zh) * 2021-07-01 2021-11-02 广东工业大学 用于车辆无线泊车影像的控制系统及方法
CN113436275A (zh) * 2021-07-12 2021-09-24 超级视线科技有限公司 一种基于标定板的泊位尺寸确定方法及系统
CN113781300B (zh) * 2021-08-17 2023-10-13 东风汽车集团股份有限公司 一种用于远距离自主泊车的车辆视觉定位方法
CN113609148A (zh) * 2021-08-17 2021-11-05 广州小鹏自动驾驶科技有限公司 一种地图更新的方法和装置
CN113781300A (zh) * 2021-08-17 2021-12-10 东风汽车集团股份有限公司 一种用于远距离自主泊车的车辆视觉定位方法
CN113899377A (zh) * 2021-08-23 2022-01-07 武汉光庭信息技术股份有限公司 一种基于相机的自动泊车终点相对坐标的测量方法及系统
CN113899377B (zh) * 2021-08-23 2023-10-27 武汉光庭信息技术股份有限公司 一种基于相机的自动泊车终点相对坐标的测量方法及系统
CN113753029B (zh) * 2021-08-27 2023-11-17 惠州华阳通用智慧车载系统开发有限公司 一种基于光流法的自动泊车方法及系统
CN113753029A (zh) * 2021-08-27 2021-12-07 惠州华阳通用智慧车载系统开发有限公司 一种基于光流法的自动泊车方法及系统
CN114241437A (zh) * 2021-11-19 2022-03-25 岚图汽车科技有限公司 一种特定区域泊车系统、控制方法及其设备
CN114179785B (zh) * 2021-11-22 2023-10-13 岚图汽车科技有限公司 一种基于面向服务的融合泊车控制系统、电子设备和车辆
CN114179785A (zh) * 2021-11-22 2022-03-15 岚图汽车科技有限公司 一种基于面向服务的融合泊车控制系统、电子设备和车辆
CN114030463B (zh) * 2021-11-23 2024-05-14 上海汽车集团股份有限公司 一种自动泊车系统的路径规划方法及装置
CN114030463A (zh) * 2021-11-23 2022-02-11 上海汽车集团股份有限公司 一种自动泊车系统的路径规划方法及装置
CN114926820A (zh) * 2022-06-09 2022-08-19 东风汽车集团股份有限公司 基于深度学习和图像帧优化的斜车位识别方法及系统
CN114926820B (zh) * 2022-06-09 2024-07-12 东风汽车集团股份有限公司 基于深度学习和图像帧优化的斜车位识别方法及系统
CN115235452A (zh) * 2022-07-22 2022-10-25 上海师范大学 基于uwb/imu和视觉信息融合的智能泊车定位系统及方法
WO2024038687A1 (en) * 2022-08-19 2024-02-22 Mitsubishi Electric Corporation System and method for controlling movement of a vehicle
CN115903837B (zh) * 2022-12-19 2023-09-29 湖州丽天智能科技有限公司 一种车载光伏机器人自动充电方法和系统
CN115903837A (zh) * 2022-12-19 2023-04-04 湖州丽天智能科技有限公司 一种车载光伏机器人自动充电方法和系统
CN116229426A (zh) * 2023-05-09 2023-06-06 华东交通大学 基于全景环视图像的无人驾驶泊车停车位检测方法
CN116772744B (zh) * 2023-08-24 2023-10-24 成都量芯集成科技有限公司 一种基于激光测距和视觉融合的3d扫描装置及其方法
CN116772744A (zh) * 2023-08-24 2023-09-19 成都量芯集成科技有限公司 一种基于激光测距和视觉融合的3d扫描装置及其方法
CN118097623A (zh) * 2024-04-22 2024-05-28 纽劢科技(上海)有限公司 基于深度学习的自动泊车障碍物接地线的检测方法、系统

Also Published As

Publication number Publication date
CN109720340B (zh) 2021-05-04
CN109720340A (zh) 2019-05-07

Similar Documents

Publication Publication Date Title
WO2020056874A1 (zh) 一种基于视觉识别的自动泊车系统及方法
Qin et al. Avp-slam: Semantic visual mapping and localization for autonomous vehicles in the parking lot
Cai et al. Vision-based trajectory planning via imitation learning for autonomous vehicles
CN109733384A (zh) 泊车路径设置方法及系统
CN111037552B (zh) 一种配电房轮式巡检机器人的巡检配置及实施方法
CN107600067A (zh) 一种基于多视觉惯导融合的自主泊车系统及方法
WO2015024407A1 (zh) 基于电力机器人的双目视觉导航系统及方法
CN106272423A (zh) 一种针对大尺度环境的多机器人协同制图与定位的方法
CN112102369A (zh) 水面漂浮目标自主巡检方法、装置、设备及存储介质
Tripathi et al. Trained trajectory based automated parking system using visual SLAM on surround view cameras
Bista et al. Appearance-based indoor navigation by IBVS using line segments
CN110163963B (zh) 一种基于slam的建图装置和建图方法
AU2012323096A1 (en) Method of calibrating a computer-based vision system onboard a craft
Alizadeh Object distance measurement using a single camera for robotic applications
CN111612823A (zh) 一种基于视觉的机器人自主跟踪方法
CN106529466A (zh) 一种基于仿生眼的无人驾驶车辆路径规划方法及系统
CN110262487B (zh) 一种障碍物检测方法、终端及计算机可读存储介质
CN112344923A (zh) 一种机器人的定位方法及其定位装置
CN111161334A (zh) 一种基于深度学习的语义地图构建方法
CN111397609A (zh) 路径规划方法、移动式机器及计算机可读介质
CN107437071B (zh) 一种基于双黄线检测的机器人自主巡检方法
CN114379544A (zh) 一种基于多传感器前融合的自动泊车系统、方法及装置
CN111757021B (zh) 面向移动机器人远程接管场景的多传感器实时融合方法
CN117570960A (zh) 一种用于导盲机器人的室内定位导航系统及方法
CN111380535A (zh) 基于视觉标签的导航方法、装置、移动式机器及可读介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18934336

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18934336

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18934336

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18934336

Country of ref document: EP

Kind code of ref document: A1