
CN115346184A - Lane information detection method, terminal and computer storage medium - Google Patents

Lane information detection method, terminal and computer storage medium

Info

Publication number
CN115346184A
CN115346184A
Authority
CN
China
Prior art keywords
lane information
feature
image
lane
information detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210988252.4A
Other languages
Chinese (zh)
Inventor
张军良 (Zhang Junliang)
赵天坤 (Zhao Tiankun)
陈远鹏 (Chen Yuanpeng)
冷静 (Leng Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202210988252.4A
Publication of CN115346184A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to a lane information detection method, a terminal and a computer storage medium. The lane information detection method includes: acquiring a first environment image around a vehicle and extracting features of the first environment image; converting the features of the first environment image into first bird's-eye-view image features; and determining lane information from the first bird's-eye-view image features in combination with a trained lane information detection model, where the lane information includes lane lines and lane marks. In this method, a neural network extracts features from the 2D environment images captured by multiple cameras, the 2D image features are converted into 3D bird's-eye-view image features, and the 3D features are input into the trained lane information detection model to obtain surround-view 3D semantic lane elements. This simplifies the lane information detection process and improves the efficiency, accuracy, continuity, scalability and intelligence of lane information detection.

Description

Lane information detection method, terminal and computer storage medium
Technical Field
The present application belongs to the technical field of lane information detection, and in particular, to a lane information detection method, a terminal and a computer storage medium.
Background
Current lane line detection technology performs two-dimensional (2D) lane line detection on the image plane of a single camera and then, under a ground-plane assumption, maps the lane lines into a three-dimensional (3D) world coordinate system by inverse perspective transformation. In real scenes, however, the road surface frequently undulates, which perturbs the extrinsic parameters between the vehicle-mounted camera and the ground plane. The extrinsic transformation is therefore usually solved in real time under the additional assumption that lane lines are parallel, but this introduces the lane-line-parallel assumption and depends on the 2D lane line detection result. As is well known, real road scenes contain complex lane lines: at intersections the parallel assumption does not hold, lane lines may be missing from the 2D detection output, and limited algorithm precision degrades detection accuracy. Methods based on real-time extrinsic estimation thus introduce further errors and still cannot resolve the lane line distortion caused by the extrinsic transformation.
The existing 2D lane line detection methods based on a single-camera image plane have the following problems:
1) Single-camera 2D lane lines are detected first, post-processed back into 3D lane lines, and then associated and fused in a further post-processing step, so the lane line detection pipeline is complex;
2) Detection rests on the ground-plane and lane-line-parallel assumptions; algorithm precision is limited, and estimating the camera extrinsics in real time introduces additional errors, reducing lane line detection accuracy;
3) Lane lines are discontinuous across cameras;
4) A neural network generally cannot adapt to the model and mounting position of each camera, so the surround-view detection model is large; meanwhile, calibrating the extrinsics of multiple cameras in real time and then associating and fusing the results places a severe demand on CPU (Central Processing Unit) computing power;
5) When the number of cameras is expanded, the neural network must be customized for cameras at different positions, and the post-processing association and fusion must also be specifically modified and optimized;
6) Pixels outside the image pixel coordinates cannot be labeled, so the lane line detection range is limited and the computer cannot infer lane lines beyond the image pixel coordinates; the method is therefore insufficiently intelligent.
Disclosure of Invention
In view of the above technical problems, the present application provides a lane information detection method, a terminal and a computer storage medium, so as to simplify the lane information detection process and improve the efficiency, accuracy, continuity, scalability and intelligence of lane information detection.
The application provides a lane information detection method, which comprises the following steps: acquiring a first environment image around a vehicle, and extracting features of the first environment image; converting the feature of the first environment image into a first bird's-eye view image feature; determining lane information by combining the trained lane information detection model according to the first aerial view image characteristic; the lane information comprises lane lines and lane marks.
In one embodiment, the step of acquiring a first environment image around a vehicle and extracting features of the first environment image includes: according to the spatial position relation of the vehicle-mounted cameras, sequentially inputting first environment images acquired by the cameras into a first neural network; extracting, by the first neural network, a first feature of the first environmental image.
In one embodiment, the step of acquiring a first environment image around a vehicle and extracting features of the first environment image includes: inputting a first feature of the first environmental image to a second neural network; extracting, by the second neural network, a second feature of the first environmental image.
In one embodiment, the step of converting the feature of the first environment image into the feature of the first bird's eye view image includes: acquiring camera parameters of each camera; determining a first aerial view image characteristic corresponding to the characteristic of the first environment image through a third neural network according to the characteristic of the first environment image and the camera parameter; wherein the feature of the first environmental image comprises a first feature of the first environmental image or a second feature of the first environmental image.
In one embodiment, before determining lane information in combination with a trained lane information detection model according to the first bird's-eye view image feature, the method includes: constructing a lane information detection model and a loss function; acquiring a second aerial view image characteristic and a lane information true value; inputting the second aerial view image feature as a training sample to the lane information detection model; and taking the lane information true value as an expected output value, and training the lane information detection model by combining the loss function.
In one embodiment, acquiring the second bird's-eye view image feature includes: controlling the vehicle to run at a preset vehicle speed, and acquiring a second environment image around the vehicle through a vehicle-mounted camera; extracting the features of the second environment image, and converting the features of the second environment image into the second bird's-eye view image features; the imaging time difference of any two vehicle-mounted cameras meets a preset difference range.
In one embodiment, acquiring lane information truth values includes: and generating a lane information true value corresponding to a value range by combining the high-precision map data and/or the laser radar data with the visual field range of the vehicle-mounted camera.
In one embodiment, the loss function is:
FL(P_t) = -α_t · (1 - P_t)^γ · log(P_t)
wherein P_t is the probability that the training sample belongs to class t, FL(P_t) is the loss value corresponding to that probability, α_t is a balance factor, γ is a focusing parameter, and t is the class.
The application also provides a terminal, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the detection method when executing the computer program.
The present application further provides a computer storage medium, which stores a computer program that, when executed by a processor, implements the steps of the above-described detection method.
The lane information detection method, the terminal and the computer storage medium have the following beneficial effects:
1) The 2D environment images captured by the multiple cameras are input into a first neural network to extract 2D environment image features; a second neural network converts these features into 3D bird's-eye-view image features, which are input into the trained lane information detection model to obtain surround-view 3D semantic lane elements. This simplifies the lane information detection process, allows the model to be deployed quickly on different hardware platforms, and keeps dependence on third-party software packages minimal;
2) Lane information is detected by a lane information detection model trained on 3D lane information truth values generated from high-precision map data and/or lidar data; detection no longer rests on the ground-plane and lane-line-parallel assumptions, and the camera extrinsics need not be estimated in real time, so the accuracy of lane information detection improves;
3) The environment images captured by the multiple cameras are input into the same neural network, which effectively resolves the discontinuity of lane lines across cameras and improves the continuity of lane information detection;
4) The same neural network directly performs surround-view detection on the multi-camera environment images, improving the efficiency of lane information detection;
5) When the number of camera inputs grows, only the first input layer of the neural network needs to be expanded rather than the whole network, improving the scalability of lane information detection;
6) Because the lane information detection model is trained on 3D lane information truth values generated from high-precision map data and/or lidar data, it can infer lane information beyond a single camera's field of view, improving the intelligence of lane information detection.
Drawings
Fig. 1 is a schematic flowchart of a lane information detection method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal according to a second embodiment of the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is a schematic flowchart of a lane information detecting method according to an embodiment of the present disclosure. As shown in fig. 1, the lane information detecting method of the present application may include the steps of:
step S101: acquiring a first environment image around a vehicle, and extracting the characteristics of the first environment image;
optionally, acquiring first environment images of different directions around the vehicle through a plurality of vehicle-mounted cameras; the feature of the first environment image is a difference feature of some regions and other regions in the first environment image, and includes a color feature, a shape feature, a spatial feature, a texture feature, a depth feature and the like.
In one embodiment, step S101 includes:
according to the spatial position relation of the vehicle-mounted cameras, sequentially inputting first environment images acquired by the cameras into a first neural network;
and extracting a first feature of the first environment image through the first neural network.
Optionally, the spatial position relationship of the vehicle-mounted cameras covers positions such as directly in front of, left-front of, left-rear of, right-rear of and right-front of the vehicle; the cameras capture at a high frame rate, and the imaging times of all cameras fall within 100 ms of one another.
Optionally, the first environment image collected by the left-front camera, the first environment image collected by the left-rear camera, the first environment image collected by the right-rear camera, and the first environment image collected by the right-front camera are sequentially input into the first neural network;
optionally, the first neural Network includes a Residual Network (ResNet), a Visual Geometry laboratory Network (VGG), and other feature extraction networks.
In one embodiment, step S101 includes:
inputting a first feature of the first environment image to a second neural network;
and extracting a second feature of the first environment image through a second neural network.
Optionally, the second neural network is a Feature Pyramid Network (FPN). After multi-scale fusion of the first features of the first environment image by the feature pyramid network, the second features of the first environment image are output. This addresses the multi-scale problem in image detection: a lane line segment of a given pixel length corresponds to a much longer physical distance when it is far away than when it is near, a scale variation similar to that of obstacles at different distances in an image. Optionally, the number of FPN layers is adjusted according to the required lane line detection precision.
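A minimal sketch of such a multi-scale fusion stage, assuming torchvision's FeaturePyramidNetwork; the channel counts match the last three stages of a ResNet-50 and the map sizes are illustrative.

```python
from collections import OrderedDict

import torch
from torchvision.ops import FeaturePyramidNetwork

# Second neural network: an FPN fusing stride-8/16/32 backbone stages
# into feature maps of a common width.
fpn = FeaturePyramidNetwork(in_channels_list=[512, 1024, 2048], out_channels=256)

features = OrderedDict(
    c3=torch.randn(6, 512, 48, 80),   # per-camera stride-8 features
    c4=torch.randn(6, 1024, 24, 40),  # stride-16
    c5=torch.randn(6, 2048, 12, 20),  # stride-32
)
fused = fpn(features)  # OrderedDict of (6, 256, H, W) maps, one per scale
for name, fmap in fused.items():
    print(name, tuple(fmap.shape))
```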
Step S102: converting the feature of the first environment image into a first bird's-eye view image feature;
in one embodiment, step S102 includes:
acquiring camera parameters of each camera;
determining a first aerial view image characteristic corresponding to the characteristic of the first environment image through a third neural network according to the characteristic of the first environment image and the camera parameter;
wherein the feature of the first environment image comprises a first feature of the first environment image or a second feature of the first environment image.
Optionally, the intrinsic and extrinsic parameters of the different cameras form a strong prior; through the third neural network, the second features of the first environment images captured by the different cameras are converted into the first bird's-eye-view image features using the corresponding cameras' intrinsic and extrinsic parameters. The intrinsic and extrinsic parameters are calibration data of the vehicle-mounted cameras measured while the vehicle is stationary. Optionally, the third neural network is a Transformer network.
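A heavily simplified sketch of one way such a camera-parameter-conditioned conversion could look: a single cross-attention layer in which learnable bird's-eye-view queries attend over the image features of all cameras, each offset by an embedding of its flattened 4x4 camera matrix. The grid size, the parameter embedding and the single-layer design are assumptions; practical BEV transformers are considerably deeper.

```python
import torch
import torch.nn as nn

class ImageToBEV(nn.Module):
    def __init__(self, dim=256, bev_h=50, bev_w=50):
        super().__init__()
        # One learnable query per BEV grid cell.
        self.bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Intrinsics/extrinsics (flattened 4x4 matrix) are embedded and added
        # to the image tokens so attention is conditioned on camera geometry.
        self.cam_embed = nn.Linear(16, dim)
        self.bev_h, self.bev_w = bev_h, bev_w

    def forward(self, img_feats, cam_mats):
        # img_feats: (num_cams, C, H, W); cam_mats: (num_cams, 4, 4)
        n, c, h, w = img_feats.shape
        tokens = img_feats.flatten(2).permute(0, 2, 1)            # (n, H*W, C)
        tokens = tokens + self.cam_embed(cam_mats.flatten(1)).unsqueeze(1)
        tokens = tokens.reshape(1, n * h * w, c)                  # all cameras jointly
        bev, _ = self.attn(self.bev_queries.unsqueeze(0), tokens, tokens)
        return bev.reshape(1, self.bev_h, self.bev_w, c).permute(0, 3, 1, 2)

bev_feature = ImageToBEV()(torch.randn(6, 256, 12, 20), torch.randn(6, 4, 4))
print(bev_feature.shape)  # torch.Size([1, 256, 50, 50])
```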
Step S103: determining lane information by combining the trained lane information detection model according to the first aerial view image characteristic; the lane information includes lane lines and lane marks.
Optionally, the detected lane lines and lane marks are stored as structured information, and/or are identified in the first environment image and presented through an interactive interface.
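One possible shape for that structured information, sketched with Python dataclasses; every field name here is an assumption rather than a schema taken from this application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneLine:
    points_3d: List[Tuple[float, float, float]]  # ordered (x, y, z), vehicle frame
    line_type: str = "solid"                     # e.g. "solid", "dashed"

@dataclass
class LaneDetectionResult:
    lane_lines: List[LaneLine] = field(default_factory=list)
    lane_marks: List[str] = field(default_factory=list)  # e.g. "arrow", "crosswalk"

result = LaneDetectionResult()
result.lane_lines.append(LaneLine(points_3d=[(0.0, 5.0, 0.0), (0.1, 10.0, 0.0)]))
print(result)
```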
In one embodiment, before step S103, the method includes:
constructing a lane information detection model and a loss function;
acquiring a second aerial view image characteristic and a lane information true value;
inputting the second aerial view image characteristics serving as training samples into the lane information detection model;
and taking the lane information true value as an expected output value, and training a lane information detection model by combining a loss function.
Optionally, the lane information detection model is a linear regression model constructed based on a gradient descent method.
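A minimal sketch of one gradient-descent training step in this setup, with second bird's-eye-view image features as the training sample and lane truth values as the expected output; the 1x1-convolution stand-in head, the tensor shapes and the temporary binary cross-entropy criterion (the focal loss described below would take its place) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Stand-in detection head mapping BEV features to per-cell lane logits.
model = torch.nn.Conv2d(256, 3, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

bev_feature = torch.randn(1, 256, 50, 50)                 # training sample
lane_truth = torch.randint(0, 2, (1, 3, 50, 50)).float()  # expected output

logits = model(bev_feature)
loss = F.binary_cross_entropy_with_logits(logits, lane_truth)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```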
In one embodiment, the lane information detection model includes a deconvolution neural network: feature maps reduced in size by downsampling through convolutional networks such as ResNet and FPN are restored to the original size by upsampling through the deconvolution network. Optionally, the deconvolution stride is adjusted according to the required lane line detection precision.
In another embodiment, the lane information detection model does not include a deconvolution neural network; the feature maps reduced in size by downsampling through convolutional networks such as ResNet or FPN are restored to the original size by direct interpolation-based upsampling.
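The two decoder options above, sketched side by side; the channel counts and the 2x upsampling factor are illustrative.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 256, 50, 50)  # feature map reduced in size by downsampling

# Option 1: learnable upsampling with a deconvolution (transposed convolution).
deconv = torch.nn.ConvTranspose2d(256, 64, kernel_size=4, stride=2, padding=1)
up1 = deconv(x)  # (1, 64, 100, 100); the stride controls the upsampling factor

# Option 2: parameter-free upsampling by direct bilinear interpolation.
up2 = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
print(up1.shape, up2.shape)
```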
Optionally, the loss function is:
FL(P_t) = -α_t · (1 - P_t)^γ · log(P_t)
wherein P_t is the probability that a training sample belongs to class t, FL(P_t) is the loss value corresponding to that probability, α_t is a balance factor, γ is a focusing parameter, and t is the class.
When γ = 0, the loss function reduces to the conventional cross-entropy loss; as γ increases, the modulation factor (1 - P_t)^γ exerts more influence. The focusing parameter γ smoothly down-weights easy-to-classify samples, and increasing γ strengthens the effect of the modulation factor. Experiments show that γ = 2 and α_t = 0.5 are optimal.
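A direct implementation of this focal loss for binary lane/non-lane targets, assuming per-pixel logits; the clamp merely guards the logarithm, and the defaults γ = 2, α_t = 0.5 follow the values reported above.

```python
import torch

def focal_loss(logits, targets, alpha_t=0.5, gamma=2.0):
    p = torch.sigmoid(logits)
    # P_t: the probability the model assigns to each sample's true class.
    p_t = torch.where(targets == 1, p, 1 - p)
    return (-alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))).mean()

loss = focal_loss(torch.randn(4, 1, 50, 50), torch.randint(0, 2, (4, 1, 50, 50)))
print(loss.item())
```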
In one embodiment, acquiring a second bird's-eye view image feature includes:
controlling a vehicle to run at a preset speed, and acquiring a second environment image around the vehicle through a vehicle-mounted camera;
extracting the features of the second environment image, and converting the features of the second environment image into the features of the second bird's-eye view image;
the imaging time difference of any two vehicle-mounted cameras meets a preset difference range.
Optionally, the vehicle is controlled to travel at a preset speed while a plurality of vehicle-mounted cameras capture second environment images in different directions around the vehicle; the features of the second environment image are extracted by a first neural network, such as a ResNet; the features of the second environment image are converted into the second bird's-eye-view image features by a second neural network, such as an FPN. Optionally, the preset vehicle speed is 60 km/h and the preset difference range is within 100 ms.
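The imaging-time-difference constraint can be checked in a few lines; the millisecond timestamp format is an assumption.

```python
def frames_synchronized(timestamps_ms, max_diff_ms=100):
    """True if every pair of camera timestamps differs by at most max_diff_ms."""
    return max(timestamps_ms) - min(timestamps_ms) <= max_diff_ms

print(frames_synchronized([1000, 1040, 1085, 1010, 1095, 1050]))  # True
```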
It is worth noting that once training of the lane information detection model is complete, the vehicle speed is not restricted during actual lane information detection; the imaging time differences of the vehicle-mounted cameras used in typical automatic driving are all within 100 ms, and experiments show that the trained lane information detection model has good applicability.
Because the intrinsic and extrinsic parameters of different cameras form a strong prior, a lane information detection model that learns directly without feature conversion can, perhaps surprisingly, achieve good detection results on data collected with the same setup (e.g. environment images from one vehicle's cameras), yet its results degrade considerably under the intrinsics and extrinsics of other vehicles. The model effectively learns the camera parameters embedded in a given batch of collected data, which overfits easily and is not robust. To reduce the model's learning burden, the environment image features are therefore converted to the bird's-eye view using the intrinsic and extrinsic parameters, so that for a different vehicle model only the camera parameters need to be supplied to the feature-conversion neural network.
In one embodiment, acquiring lane information truth values includes:
and generating a lane information truth value corresponding to the value range by combining the high-precision map data and/or the laser radar data with the view range of the vehicle-mounted camera.
It is worth noting that the value range of the lane information truth values is set according to the field of view of each camera, which helps training of the lane information detection model converge.
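A sketch of restricting truth values to such a value range, assuming HD-map lane points expressed in vehicle coordinates and a rectangular bird's-eye-view range; the specific bounds are illustrative.

```python
def clip_truth_to_range(lane_points, x_range=(-15.0, 15.0), y_range=(0.0, 80.0)):
    """Keep only the 3D lane truth points inside the chosen BEV value range."""
    return [
        (x, y, z)
        for (x, y, z) in lane_points
        if x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]
    ]

hd_map_lane = [(0.0, 5.0, 0.0), (0.1, 40.0, 0.2), (0.3, 120.0, 0.5)]
print(clip_truth_to_range(hd_map_lane))  # drops the 120 m point
```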
It is worth noting that lane lines are static elements: as long as the road is unchanged their positions are fixed, so 3D lane line truth values can be obtained reliably, and with the development of high-precision maps, lane lines can be obtained at very long range even where they are never captured in the image pixel coordinate system. Unlike common lane line detection, once 3D lane line truth values are available the usual detection assumptions can be discarded and all kinds of lane lines can be learned by an end-to-end detection model; the lane information detection method therefore no longer rests on the lane-line-parallel assumption and needs no 2D-to-3D back-projection of lane lines, which greatly simplifies the algorithm, makes it more comprehensive, and better matches human-like perception. Likewise, the detection capability of the model can be expanded rapidly simply by expanding the lane line truth data: when the environment images input by the multiple cameras support surround-view lane line detection, and/or the images from some cameras meet the detection condition for partial-view lane lines, enlarging the value range of the lane line truth values gives the detection model the ability to infer lane lines it cannot directly see. Meanwhile, when the fields of view of multiple cameras overlap, this lane information detection method avoids the mis-stitching of traditional post-processing: the combined effect of the many contributing factors is left to the detection model to learn, enabling data-driven algorithm iteration.
Optionally, the first environment image is an environment image around the vehicle obtained when lane information detection is actually performed after the training of the lane information detection model is completed; the second environment image is an environment image around the vehicle obtained when the lane information detection model is trained, wherein the first environment image and the second environment image can be the same environment image or different environment images.
Optionally, the first bird's-eye view image feature is an image feature of the first bird's-eye view angle obtained by converting the feature of the first environment image after the training of the lane information detection model is completed; and the second aerial view image characteristic is an image characteristic of a second aerial view visual angle obtained by converting the characteristics of the second environment image when the lane information detection model is trained, wherein the first aerial view visual angle and the second aerial view visual angle can be the same aerial view visual angle or different aerial view visual angles.
In the lane information detection method of the present application, a neural network extracts features from the 2D environment images captured by multiple cameras, the 2D environment image features are converted into 3D bird's-eye-view image features, and the 3D features are input into the trained lane information detection model to obtain 3D semantic lane elements; this simplifies the lane information detection process and improves the efficiency, accuracy, continuity, scalability and intelligence of lane information detection.
Fig. 2 is a schematic structural diagram of a terminal provided in this application. The terminal of the application includes: a processor 110, a memory 111, and a computer program 112 stored in the memory 111 and operable on the processor 110. The steps in the above-described lane information detection method embodiments are implemented when the processor 110 executes the computer program 112.
The terminal may include, but is not limited to, a processor 110, a memory 111. Those skilled in the art will appreciate that fig. 2 is only an example of a terminal and is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or different components, e.g., the terminal may also include input-output devices, network access devices, buses, etc.
The processor 110 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 111 may be an internal storage unit of the terminal, such as the terminal's hard disk or internal memory. The memory 111 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal. Further, the memory 111 may include both an internal storage unit and an external storage device of the terminal. The memory 111 stores the computer program as well as other programs and data required by the terminal, and may also temporarily store data that has been output or is to be output.
The present application further provides a computer storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the above-described lane information detection method embodiment.
For brevity, not every possible combination of the technical features of the above embodiments is described; nevertheless, any combination of these features that contains no contradiction should be considered within the scope of the present disclosure.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A lane information detecting method, characterized by comprising:
acquiring a first environment image around a vehicle, and extracting features of the first environment image;
converting the feature of the first environment image into a first bird's-eye view image feature;
determining lane information by combining the trained lane information detection model according to the first aerial view image characteristic; the lane information comprises lane lines and lane marks.
2. The detection method according to claim 1, wherein the step of acquiring a first environment image around a vehicle and extracting a feature of the first environment image includes:
according to the spatial position relation of the vehicle-mounted cameras, sequentially inputting the first environment images acquired by the cameras into a first neural network;
extracting, by the first neural network, a first feature of the first environment image.
3. The detection method according to claim 2, wherein the step of acquiring a first environment image around the vehicle and extracting a feature of the first environment image includes:
inputting a first feature of the first environmental image to a second neural network;
extracting, by the second neural network, a second feature of the first environmental image.
4. The detection method according to claim 2 or 3, wherein the step of converting the feature of the first environment image into the first bird's-eye view image feature comprises:
acquiring camera parameters of each camera;
determining a first aerial view image characteristic corresponding to the characteristic of the first environment image through a third neural network according to the characteristic of the first environment image and the camera parameter;
wherein the feature of the first environmental image comprises a first feature of the first environmental image or a second feature of the first environmental image.
5. The detection method according to claim 1, wherein before determining the lane information based on the first bird's eye-view image feature in combination with the trained lane information detection model, the method comprises:
constructing a lane information detection model and a loss function;
acquiring a second aerial view image characteristic and a lane information true value;
inputting the second aerial view image feature as a training sample to the lane information detection model;
and taking the lane information true value as an expected output value, and training the lane information detection model by combining the loss function.
6. The detection method according to claim 5, wherein acquiring the second bird's-eye view image feature includes:
controlling the vehicle to run at a preset vehicle speed, and acquiring a second environment image around the vehicle through a vehicle-mounted camera;
extracting the features of the second environment image, and converting the features of the second environment image into the second bird's-eye view image features;
the imaging time difference of any two vehicle-mounted cameras meets a preset difference range.
7. The detection method of claim 6, wherein obtaining a true value of lane information comprises:
and generating a lane information true value corresponding to a value range by combining the high-precision map data and/or the laser radar data with the visual field range of the vehicle-mounted camera.
8. The detection method of claim 5, wherein the loss function is:
FL(P_t) = -α_t · (1 - P_t)^γ · log(P_t)
wherein P_t is the probability that the training sample belongs to class t, FL(P_t) is the loss value corresponding to that probability, α_t is a balance factor, γ is a focusing parameter, and t is the class.
9. A terminal, characterized in that the terminal comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the detection method according to any one of claims 1 to 8 when executing the computer program.
10. A computer storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the detection method according to any one of claims 1 to 8.
CN202210988252.4A 2022-08-17 2022-08-17 Lane information detection method, terminal and computer storage medium Pending CN115346184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210988252.4A CN115346184A (en) 2022-08-17 2022-08-17 Lane information detection method, terminal and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210988252.4A CN115346184A (en) 2022-08-17 2022-08-17 Lane information detection method, terminal and computer storage medium

Publications (1)

Publication Number Publication Date
CN115346184A true CN115346184A (en) 2022-11-15

Family

ID=83951154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210988252.4A Pending CN115346184A (en) 2022-08-17 2022-08-17 Lane information detection method, terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN115346184A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052123A (en) * 2023-01-28 2023-05-02 广汽埃安新能源汽车股份有限公司 Parking space detection method, device, vehicle and equipment based on camera picture



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant after: United New Energy Automobile Co.,Ltd.
Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant before: Hozon New Energy Automobile Co., Ltd.