
CN114445797A - Night driving vision auxiliary method and related equipment - Google Patents


Info

Publication number
CN114445797A
CN114445797A (application number CN202111646632.1A / CN202111646632A)
Authority
CN
China
Prior art keywords
image
vehicle
light intensity
segmentation
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111646632.1A
Other languages
Chinese (zh)
Inventor
张宁 (Zhang Ning)
岑显达 (Cen Xianda)
彭佳彬 (Peng Jiabin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202111646632.1A
Publication of CN114445797A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0118 - Head-up displays characterised by optical features comprising devices for improving the contrast of the display / brilliance control visibility
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0141 - Head-up displays characterised by optical features characterised by the informative content of the display

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention provides a night driving vision auxiliary method comprising the following steps: when the vehicle runs at night, acquiring an image in front of the vehicle in real time and detecting the light intensity of the image; when the light intensity in the image in front of the vehicle is detected to be greater than a first preset value, processing the image through a preset light intensity processing model so that the light intensity of the processed image is lower than that of the original image; and sending the processed image to a head-up display system, which projects it to a designated position on the windshield for display. When the light ahead is intense, the driver can complete a meeting maneuver by observing the content displayed at the designated position on the windshield, reducing the danger posed by strong headlights during night-time vehicle meetings and improving night-driving safety.

Description

Night driving vision auxiliary method and related equipment
Technical Field
The invention relates to the field of intelligent driving, in particular to a night driving vision auxiliary method and related equipment.
Background
With the development of society, more and more roads are being built, and the car has become the primary means of transport for most families. This convenience has also made traffic harder to manage. For example, during night driving, lights are used extensively for road illumination; high beams are intense and can interfere with the sight of drivers in oncoming vehicles, easily causing problems that affect traffic safety. Because a driver cannot know at any moment whether an oncoming driver is attentive, irregular use of high beams may interfere with the driver's line of sight, increasing the danger of vehicles meeting at night.
Disclosure of Invention
The embodiment of the invention provides a night driving vision auxiliary method and related equipment. When the vehicle runs at night, the light intensity ahead can be detected from an image in front of the vehicle; when the light intensity in that image is high, the image is processed to obtain an image of lower light intensity, which a head-up display system projects to a designated position on the windshield for display. The driver can then complete a meeting maneuver by observing the content displayed at the designated position on the windshield when the light ahead is intense, reducing the danger posed by strong headlights during night-time vehicle meetings and improving the safety of night driving.
In a first aspect, an embodiment of the present invention provides a night driving visual assistance method, where the method includes:
when the vehicle is driven at night, acquiring an image in front of the vehicle in real time, and detecting the light intensity of the image in front of the vehicle;
when the fact that the light intensity in the image in front of the vehicle is larger than a first preset value is detected, processing the image in front of the vehicle through a preset light intensity processing model, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield by the head-up display system for displaying.
Optionally, the detecting the light intensity of the image in front of the vehicle includes:
segmenting the image in front of the vehicle according to a preset proportion to obtain a corresponding number of segmented images;
and inputting the segmentation image into a preset light intensity detection model, and processing the segmentation image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle.
Optionally, the segmenting the image in front of the vehicle according to a preset ratio to obtain a corresponding number of segmented images includes:
tiling the image in front of the vehicle with a 10 × 10 grid;
segmenting the upper-left 6 × 6 region of the image in front of the vehicle as a first segmentation area to obtain a first segmentation image;
segmenting the lower-left 6 × 6 region of the image in front of the vehicle as a second segmentation area to obtain a second segmentation image;
segmenting the upper-right 6 × 6 region of the image in front of the vehicle as a third segmentation area to obtain a third segmentation image;
and segmenting the lower-right 6 × 6 region of the image in front of the vehicle as a fourth segmentation area to obtain a fourth segmentation image.
Optionally, the inputting the segmented image into a preset light intensity detection model, and processing the segmented image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle, includes:
and simultaneously inputting the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image into the preset light intensity detection model, and processing the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image through the preset light intensity detection model to obtain light intensity detection results of the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image.
Optionally, when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, processing the image in front of the vehicle through a preset light intensity processing model includes:
when the fact that the light intensity in the image in front of the vehicle is larger than a first preset value is detected, obtaining an image to be processed according to the detection result, wherein the image to be processed is at least one of the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image, and the light intensity in the image is larger than the first preset value;
inputting the image to be processed into a preset light intensity processing model, processing the image to be processed through the preset light intensity processing model to obtain a target image corresponding to the image to be processed, wherein the light intensity of the target image is smaller than a first preset value.
Optionally, the preset light intensity processing model includes an image generation network, the inputting the image to be processed into the preset light intensity processing model, and processing the image to be processed through the preset light intensity processing model to obtain a target image corresponding to the image to be processed includes:
inputting the image to be processed into a preset light intensity processing model, and generating a target image corresponding to the image to be processed through an image generation network in the preset light intensity processing model.
Optionally, before the sending the processed image of the front of the vehicle to a head-up display system, and projecting the processed image of the front of the vehicle to a designated position on a windshield by the head-up display system for displaying, the method further includes:
when it is detected that the light intensity in the image in front of the vehicle is smaller than a second preset value, obtaining a display area image according to the detection result, wherein the display area image is the one of the first, second, third, and fourth segmentation images whose light intensity is below the second preset value and is the lowest, and the second preset value is smaller than the first preset value;
and determining the designated position on the windshield according to the position of the display area image in the image in front of the vehicle.
In a second aspect, an embodiment of the present invention provides a night driving visual auxiliary device, where the device includes:
the acquisition module is used for acquiring an image in front of a vehicle in real time and detecting the light intensity of the image in front of the vehicle when the vehicle is driven at night;
the processing module is used for processing the image in front of the vehicle through a preset light intensity processing model when detecting that the light intensity in the image in front of the vehicle is greater than a first preset value, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and the display module is used for sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield through the head-up display system for displaying.
In a third aspect, an embodiment of the present invention provides an electronic device, including: the night driving vision assisting system comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the steps in the night driving vision assisting method provided by the embodiment of the invention.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the night driving visual assistance method provided by the embodiment of the present invention.
In the embodiment of the invention, when the vehicle runs at night, the image in front of the vehicle is acquired in real time and its light intensity is detected. When the light intensity in the image is detected to be greater than a first preset value, the image is processed through a preset light intensity processing model so that the light intensity of the processed image is lower than that of the original image. The processed image is then sent to a head-up display system, which projects it to a designated position on the windshield for display. In this way, the light intensity ahead of a vehicle running at night can be detected from the image in front of the vehicle; when that intensity is high, the image is processed to obtain a lower-intensity image, which the head-up display system projects to the designated position on the windshield. The driver can thus complete a meeting maneuver by observing the displayed content when the light ahead is intense, reducing the danger posed by strong headlights during night-time vehicle meetings and improving night-driving safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a night driving vision assisting method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a night driving vision assisting device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a night driving vision assisting method according to an embodiment of the present invention, and as shown in fig. 1, the night driving vision assisting method includes the following steps:
101. when the vehicle runs at night, the image in front of the vehicle is obtained in real time, and light intensity detection is carried out on the image in front of the vehicle.
In the embodiment of the present invention, the night driving visual assistance method may be deployed on a server or on a vehicle's in-vehicle system. When deployed on a server, a large number of vehicles can connect to it, each identified by a user identifier, to obtain the night driving vision assistance service provided by the server. When deployed on an in-vehicle system, each vehicle obtains the service through its own in-vehicle unit.
When the vehicle runs at night, a vehicle-mounted camera can capture an image in front of the vehicle in real time, after which light intensity detection is performed on that image using image detection techniques.
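As an illustrative aside (not part of the patent's disclosure), the light intensity of a captured frame can be crudely approximated by its mean luminance; the function below is a hypothetical stand-in for the trained detection model described later in this document:

```python
import numpy as np

def estimate_light_intensity(frame: np.ndarray) -> float:
    """Return mean luminance of an RGB frame as a crude intensity proxy.

    frame: H x W x 3 uint8 array. The patent uses a trained CNN; this
    simple luminance average is only a stand-in for illustration.
    """
    # ITU-R BT.601 luma weights for RGB -> luminance
    weights = np.array([0.299, 0.587, 0.114])
    luma = frame[..., :3].astype(np.float64) @ weights
    return float(luma.mean())

# Example: a dark frame containing one bright headlight-like patch
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 40:60] = 255          # bright region
intensity = estimate_light_intensity(frame)
```

A real system would compare `intensity` (or the per-region output of the trained model) against the first preset value to decide whether processing is needed.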
Furthermore, the vehicle camera may be mounted above the driver so as to acquire a front image whose field of view is similar to the driver's, allowing the light intensity to be detected more accurately.
Further, the detection result of the light detection of the image in front of the vehicle may be a light intensity value and a position where the high-intensity light appears.
Specifically, the image in front of the vehicle can be segmented according to a preset proportion to obtain a corresponding number of segmented images; and inputting the segmented image into a preset light intensity detection model, and processing the segmented image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle.
In the embodiment of the invention, segmenting the image in front of the vehicle according to a preset proportion may be an even segmentation. For example, the image may be divided in the shape of the Chinese character 田 (a 2 × 2 grid), yielding an upper-left image, an upper-right image, a lower-left image, and a lower-right image. Alternatively, it may be divided in the shape of the character 川 (three vertical strips), yielding a left image, a middle image, and a right image.
The preset light intensity detection model can be understood as a trained light intensity detection model that outputs a light intensity detection result for an image. The model may be built on a convolutional neural network and obtained through supervised training on a first data set. The first data set comprises first sample images and corresponding annotation data, where the annotation data consists of expert annotations of light position and light intensity and can be understood as the ground truth. The first data set is divided into a first training set and a first test set. During training, a first sample image from the training set is input into the light intensity detection model to be trained, which produces a detection result; the error between this detection result and the annotation data is computed, and the model is adjusted through error back-propagation so that its detection results approach the ground truth and the model learns to detect light intensity accurately. The model is then evaluated on the first test set, and when the test results converge, the trained light intensity detection model is obtained.
In the embodiment of the invention, segmenting the image in front of the vehicle reduces the size of each input, which improves detection speed for the convolutional neural network in the light intensity detection model. However, because all segmented images are input into the preset light intensity detection model, the global information of the image in front of the vehicle is still retained and no information is lost.
Specifically, in the step of segmenting the image in front of the vehicle according to the preset proportion, the image may be tiled with a 10 × 10 grid; the upper-left 6 × 6 region is taken as a first segmentation area and cropped to obtain a first segmented image; the lower-left 6 × 6 region is taken as a second segmentation area to obtain a second segmented image; the upper-right 6 × 6 region is taken as a third segmentation area to obtain a third segmented image; and the lower-right 6 × 6 region is taken as a fourth segmentation area to obtain a fourth segmented image.
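The tiling and cropping described above can be sketched as follows (an illustrative implementation under the assumption that the 10 × 10 grid divides the image evenly; the function and dictionary key names are hypothetical):

```python
import numpy as np

def segment_front_image(img: np.ndarray):
    """Split an image into four overlapping corner crops.

    The image is treated as a 10 x 10 grid and each crop covers a
    6 x 6 block of that grid (60% of each dimension), so adjacent
    crops overlap by 2 grid cells, as described in the patent text.
    """
    h, w = img.shape[:2]
    ch, cw = (h * 6) // 10, (w * 6) // 10   # crop height / width
    return {
        "first":  img[:ch, :cw],            # upper-left
        "second": img[h - ch:, :cw],        # lower-left
        "third":  img[:ch, w - cw:],        # upper-right
        "fourth": img[h - ch:, w - cw:],    # lower-right
    }

img = np.arange(100 * 100 * 3, dtype=np.uint8).reshape(100, 100, 3)
crops = segment_front_image(img)
```

Note that the right 20 columns of the first crop coincide with the left 20 columns of the third crop, which is exactly the overlap region the text relies on for capturing light diffusion across segment boundaries.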
The image in front of the vehicle is tiled with a 10 × 10 grid and divided into four regions, each 6 × 6 grid cells in size, so that every region overlaps its two neighbours. When convolution is computed over each region, the edge information of adjacent regions is therefore taken into account, the information of each region is not isolated, and the detection accuracy of the light intensity detection model improves. For example, when high-intensity light appears in the first segmentation region, the light diffuses; the second and third segmentation regions can capture this diffusion in their overlap areas, and the fourth segmentation region can capture it in its upper-left overlap area, so the presence of high-intensity light in the first segmentation region can be judged more accurately.
After the segmentation areas are determined, the first segmented image is obtained by cropping the first segmentation area, the second from the second segmentation area, the third from the third segmentation area, and the fourth from the fourth segmentation area. The first segmented image is the upper-left corner of the image in front of the vehicle: its right side contains the left-edge information of the third segmented image, its lower side contains the upper-edge information of the second segmented image, and its lower-right corner contains the upper-left corner information of the fourth segmented image. Similarly, the second segmented image is the lower-left corner: its right side contains the left-edge information of the fourth segmented image, its upper side contains the lower-edge information of the first segmented image, and its upper-right corner contains the lower-left corner information of the third segmented image. The third segmented image is the upper-right corner: its left side contains the right-edge information of the first segmented image, its lower side contains the upper-edge information of the fourth segmented image, and its lower-left corner contains the upper-right corner information of the second segmented image.
The fourth segmented image is the lower-right corner of the image in front of the vehicle: its left side contains the right-edge information of the second segmented image, its upper side contains the lower-edge information of the third segmented image, and its upper-left corner contains the lower-right corner information of the first segmented image. In this way, when light intensity detection is performed on each segmented image, the light-diffusion features in the edge information can be effectively extracted, improving the accuracy of the light intensity detection model.
The preset light intensity detection model may process the segmented images of the front of the vehicle, with the first, second, third, and fourth segmented images used together as its input. A second data set may be constructed to train the light intensity detection model; it comprises second sample images and annotation data, where each second sample image consists of a first, second, third, and fourth segmented image produced by the segmentation method described above. Similarly, the second data set is divided into a second training set and a second test set: the model to be trained is trained on the second training set and tested on the second test set during training, and the trained light intensity detection model is obtained once the test results converge.
After the trained light intensity detection model is obtained, it is preset in the server or in-vehicle system. Once the image in front of the vehicle has been segmented, the first, second, third, and fourth segmented images can be input into the preset light intensity detection model simultaneously, and their light intensity detection results are obtained through the model's processing.
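Feeding the four crops to the model as one batch might look like the sketch below, where `detect_batch` is a hypothetical stand-in for the trained detector that simply returns the mean brightness of each crop:

```python
import numpy as np

def detect_batch(crops: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained detector: given a batch of
    N crops (N x H x W x 3), return one intensity score per crop."""
    return crops.reshape(crops.shape[0], -1).mean(axis=1)

# Stack the four segmented images into a single batch, mirroring the
# patent's simultaneous input of all four crops to the model.
first  = np.full((60, 60, 3),  10, dtype=np.uint8)
second = np.full((60, 60, 3),  20, dtype=np.uint8)
third  = np.full((60, 60, 3), 200, dtype=np.uint8)   # simulated glare
fourth = np.full((60, 60, 3),  30, dtype=np.uint8)
batch = np.stack([first, second, third, fourth])
scores = detect_batch(batch)
```

The per-crop scores identify both the intensity value and which region the high-intensity light appears in, matching the detection result the text describes.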
Performing light intensity detection on each segmented image with the trained model allows the light-diffusion features in the edge information to be extracted effectively, improving the model's accuracy.
102. When the fact that the light intensity in the image in front of the vehicle is larger than a first preset value is detected, the image in front of the vehicle is processed through a preset light intensity processing model, and therefore the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle.
In an embodiment of the present invention, the light intensity detection result may include an appearance position of the high-intensity light and a light intensity value, and the appearance position of the high-intensity light may be at least one of the first divided image, the second divided image, the third divided image, and the fourth divided image.
Processing the image in front of the vehicle may consist of applying light filtering with image processing techniques to filter out the high-intensity light, so that the light intensity of the processed image is lower than that of the unfiltered image. Because the processed image has low light intensity, observing it allows the driver to reduce the interference of high-intensity light with the line of sight.
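For intuition only, a non-learned baseline for such light filtering is to attenuate pixel values above a threshold. The patent's actual processing uses a trained model, so the function below is a hypothetical simplification:

```python
import numpy as np

def suppress_glare(img: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Compress pixel values above `threshold` with a soft knee, a crude
    stand-in for the patent's learned light intensity processing model."""
    out = img.astype(np.float64)
    mask = out > threshold
    # Keep 20% of the excess above the threshold instead of hard-clipping,
    # so some highlight detail survives.
    out[mask] = threshold + (out[mask] - threshold) * 0.2
    return out.astype(np.uint8)

bright = np.full((4, 4, 3), 255, dtype=np.uint8)
dim = suppress_glare(bright)   # 255 -> 200 + 55 * 0.2 = 211
```

Pixels at or below the threshold pass through unchanged, so only the high-intensity light is attenuated.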
Specifically, when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, obtaining an image to be processed according to the detection result, where the image to be processed may be at least one of a first segmented image, a second segmented image, a third segmented image, and a fourth segmented image, where the light intensity is greater than the first preset value; inputting the image to be processed into a preset light intensity processing model, and processing the image to be processed through the preset light intensity processing model to obtain a target image corresponding to the image to be processed, wherein the light intensity of the target image is smaller than a first preset value.
For example, when the light intensity in one or more of the first, second, third, and fourth segmented images is detected to be greater than a first predetermined value, it may be determined that the light intensity in the image in front of the vehicle is greater than the first predetermined value. The segmented images with the light intensity larger than the first preset value in the first segmented image, the second segmented image, the third segmented image and the fourth segmented image can be used as images to be processed.
The segmented image with the high-intensity light can be used as the image to be processed, then the image to be processed is subjected to light filtering processing, the high-intensity light in the image in front of the vehicle is filtered, the light intensity of the obtained target image is smaller than the light intensity in the image in front of the vehicle which is not subjected to light filtering processing, and the target image is low in light intensity, so that the interference of the high-intensity light to the sight line can be reduced by a driver through observing the target image.
Specifically, the preset light intensity processing model includes an image generation network. In the step of processing the image to be processed through the preset light intensity processing model to obtain the corresponding target image, the image to be processed may be input into the preset light intensity processing model, and the target image corresponding to the image to be processed is generated by the image generation network in the model.
It should be noted that the preset light intensity processing model can be understood as a trained light intensity processing model. During training, the model is constructed based on a generative adversarial network (GAN), which includes a generation network and a discrimination network. After training of the generative adversarial network is completed, only the generation network is deployed in the server or in-vehicle system, and the image to be processed is converted into the target image by the generation network, so as to obtain a target image whose light intensity is smaller than the first preset value.
Further, a third data set is constructed, which includes third sample images and fourth sample images. A third sample image may be a segmented image containing high-intensity light, a fourth sample image may be a segmented image containing low-intensity light, and the two differ only in light intensity. During training, a third sample image is input into the generation network to obtain a generated image; the generated image is compared with the corresponding fourth sample image in the discrimination network, and the error and the similarity between the two are calculated. The parameters of the generation network are adjusted according to the error so that the images it generates obtain higher scores in the discrimination network, while the discrimination network is adjusted according to the similarity so that it gives lower scores to generated images, completing one round of the adversarial game. In this way, the images generated by the generation network from the third sample images become increasingly close to the fourth sample images. When the discrimination network can no longer distinguish a generated image from a fourth sample image, training of the generation network is complete; the generation network is then deployed in the server or in-vehicle system, and the image to be processed is converted by the generation network into a target image with lower light intensity.
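The adversarial game described above corresponds to the standard GAN objective; the formulation below is a sketch of that usual objective, not an equation given in the disclosure. Writing $x$ for a third (high-intensity) sample, $y$ for a fourth (low-intensity) sample, $G$ for the generation network, and $D$ for the discrimination network:

```latex
\min_{G}\,\max_{D}\; V(D,G) \;=\;
\mathbb{E}_{y \sim p_{\mathrm{low}}}\!\left[\log D(y)\right] \;+\;
\mathbb{E}_{x \sim p_{\mathrm{high}}}\!\left[\log\!\left(1 - D\!\left(G(x)\right)\right)\right]
```

Training alternates between updating $D$ (maximizing) and $G$ (minimizing); when $D(G(x)) \approx D(y) \approx 1/2$, the discrimination network can no longer tell a generated image from a fourth sample image, which matches the stopping condition described above.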
103. And sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield by the head-up display system for displaying.
In the embodiment of the present invention, the processed image in front of the vehicle may be sent to a Head-Up Display (HUD) system. Because the light intensity of the processed image is greatly reduced, the driver can observe the road conditions in front of the vehicle through the processed image, which reduces the interference of the high-intensity light in front of the vehicle with the driver's line of sight.
The designated position on the windshield may be a position that avoids the high-intensity light; for example, when high-intensity light appears in the upper left corner, the designated position may be the lower right corner or the upper right corner. The head-up display system can control the projection direction and position through a rotatable pan-tilt mount.
Specifically, when it is detected that the light intensity in the image in front of the vehicle is smaller than a second preset value, a display area image is obtained according to the detection result, where the display area image is the one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is smaller than the second preset value and is the smallest, and the second preset value is smaller than the first preset value; the designated position on the windshield is then determined according to the position of the display area image in the image in front of the vehicle.
It should be noted that the light detection result includes the positions where high-intensity light and low-intensity light appear and the corresponding light intensities. A light intensity smaller than the second preset value indicates the presence of low-intensity light; the position where the low-intensity light appears can be determined and used as the designated position for projecting the target image. In this way, the image projected by the head-up display system is clearer, which helps the driver observe the road conditions at night.
The position of the segmented image, among the first, second, third, and fourth segmented images, whose light intensity is smaller than the second preset value and is the smallest, can be used as the designated position, so that the target image is projected at that position.
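For illustration only (not part of the disclosed embodiment), the selection of the display area can be sketched as follows; as before, mean grayscale brightness is an assumed stand-in for the detected light intensity, and the function name is hypothetical:

```python
import numpy as np

def choose_designated_position(segments, second_preset):
    """Pick the display-area segment: the corner whose light intensity is
    below the second preset value and is the smallest among all corners.
    Mean grayscale brightness is an assumed intensity metric."""
    intensities = {name: float(np.mean(img)) for name, img in segments.items()}
    candidates = {n: v for n, v in intensities.items() if v < second_preset}
    if not candidates:
        return None  # no sufficiently dark corner; a default position may be used instead
    return min(candidates, key=candidates.get)  # darkest qualifying corner
```

The returned corner name maps to a position on the windshield where the head-up display system projects the target image.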
In the embodiment of the invention, when the vehicle is driven at night, the image in front of the vehicle is acquired in real time and its light intensity is detected. When it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, the image is processed through a preset light intensity processing model so that the light intensity of the processed image is smaller than that of the original image. The processed image is then sent to a head-up display system, which projects it to a designated position on the windshield for display. In this way, the light intensity in front of the vehicle can be detected from the image while driving at night; when that light intensity is high, the image is processed to obtain an image with low light intensity, and the low-intensity image is projected by the head-up display system to the designated position on the windshield. The driver can thus negotiate an oncoming vehicle by observing the display at the designated position on the windshield when the light intensity in front of the vehicle is high, which reduces the danger posed by high-beam headlights when meeting an oncoming vehicle at night and improves the safety of driving at night.
It should be noted that the night driving visual assistance method provided by the embodiment of the invention can be applied to smart phones, computers, servers and other devices capable of performing night driving visual assistance.
Optionally, referring to fig. 2, fig. 2 is a schematic structural diagram of a night driving vision assisting device according to an embodiment of the present invention, and as shown in fig. 2, the device includes:
the acquisition module 201 is used for acquiring an image in front of a vehicle in real time and detecting the light intensity of the image in front of the vehicle when the vehicle is driving at night;
the processing module 202 is configured to, when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, process the image in front of the vehicle through a preset light intensity processing model, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and the display module 203 is configured to send the processed image in front of the vehicle to a head-up display system, and project the processed image in front of the vehicle to a specified position on a windshield by the head-up display system for display.
Optionally, the obtaining module 201 includes:
the segmentation unit is used for segmenting the images in front of the vehicle according to a preset proportion to obtain a corresponding number of segmented images;
and the first processing unit is used for inputting the segmentation image into a preset light intensity detection model and processing the segmentation image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle.
Optionally, the segmentation unit includes:
a tiling subunit configured to tile the vehicle front image by a 10 × 10 grid;
the first segmentation subunit is used for segmenting the vehicle front image by taking the upper left corner 6 x 6 of the vehicle front image as a first segmentation area to obtain a first segmentation image;
the second segmentation subunit is used for segmenting by taking the lower left corner 6 x 6 of the image in front of the vehicle as a second segmentation area to obtain a second segmentation image;
the third segmentation subunit is used for segmenting the vehicle front image by taking the upper right corner 6 x 6 of the vehicle front image as a third segmentation area to obtain a third segmentation image;
and the fourth segmentation subunit is used for segmenting by taking the lower right corner 6 x 6 of the image in front of the vehicle as a fourth segmentation area to obtain a fourth segmentation image.
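As an illustration only (not part of the disclosed embodiment), the corner segmentation performed by the subunits above can be sketched in Python. The image is assumed to be a NumPy array whose height and width are divisible by 10; the function name and returned dictionary layout are hypothetical:

```python
import numpy as np

def segment_front_image(image):
    """Tile the front-of-vehicle image with a 10 x 10 grid and cut out the
    four overlapping corner regions, each covering 6 x 6 grid cells."""
    h, w = image.shape[:2]
    ch, cw = h * 6 // 10, w * 6 // 10  # 6 of the 10 grid cells per side
    return {
        "first": image[:ch, :cw],           # upper left 6 x 6
        "second": image[h - ch:, :cw],      # lower left 6 x 6
        "third": image[:ch, w - cw:],       # upper right 6 x 6
        "fourth": image[h - ch:, w - cw:],  # lower right 6 x 6
    }
```

Because each corner region covers 6 of the 10 grid cells per side, adjacent regions overlap in the middle of the image, so no part of the image in front of the vehicle falls outside every segmented image.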
Optionally, the first processing unit is further configured to input the first segmented image, the second segmented image, the third segmented image, and the fourth segmented image into the preset light intensity detection model at the same time, and obtain light intensity detection results of the first segmented image, the second segmented image, the third segmented image, and the fourth segmented image through processing by the preset light intensity detection model.
Optionally, the processing module 202 includes:
the detection unit is used for obtaining an image to be processed according to the detection result when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, wherein the image to be processed is at least one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is greater than the first preset value;
and the second processing unit is used for inputting the image to be processed into a preset light intensity processing model, processing the image to be processed through the preset light intensity processing model, and obtaining a target image corresponding to the image to be processed, wherein the light intensity of the target image is smaller than a first preset value.
Optionally, the second processing unit is further configured to input the image to be processed into the preset light intensity processing model, and generate the target image corresponding to the image to be processed through the image generation network in the preset light intensity processing model.
Optionally, before the displaying module 203, the apparatus further includes:
the detection module is used for obtaining a display area image according to the detection result when it is detected that the light intensity in the image in front of the vehicle is smaller than a second preset value, wherein the display area image is the one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is smaller than the second preset value and is the smallest, and the second preset value is smaller than the first preset value;
and the determining module is used for determining the designated position on the windshield according to the position of the display area image in the image in front of the vehicle.
It should be noted that the night driving visual assistance device provided by the embodiment of the present invention can be applied to smart phones, computers, servers, and other devices that can perform night driving visual assistance.
The night driving vision auxiliary device provided by the embodiment of the invention can realize each process realized by the night driving vision auxiliary method in the method embodiment, and can achieve the same beneficial effect. To avoid repetition, further description is omitted here.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 3, including: memory 302, processor 301 and a computer program of a night driving vision assistance method stored on the memory 302 and executable on the processor 301, wherein:
the processor 301 is configured to call the computer program stored in the memory 302, and execute the following steps:
when the vehicle runs at night, acquiring an image in front of the vehicle in real time, and detecting the light intensity of the image in front of the vehicle;
when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, processing the image in front of the vehicle through a preset light intensity processing model, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield by the head-up display system for displaying.
Optionally, the performing, by the processor 301, the light intensity detection on the image in front of the vehicle includes:
segmenting the image in front of the vehicle according to a preset proportion to obtain a corresponding number of segmented images;
and inputting the segmentation image into a preset light intensity detection model, and processing the segmentation image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle.
Optionally, the segmenting the image in front of the vehicle according to a preset ratio by the processor 301 to obtain a corresponding number of segmented images includes:
tiling the image in front of the vehicle through a 10 by 10 grid;
dividing by taking the upper left corner 6 x 6 of the image in front of the vehicle as a first divided area to obtain a first divided image;
dividing by taking the lower left corner 6 x 6 of the image in front of the vehicle as a second divided region to obtain a second divided image;
segmenting the vehicle front image by taking the upper right corner 6 x 6 of the vehicle front image as a third segmentation area to obtain a third segmentation image;
and segmenting by taking the lower right corner 6 x 6 of the image in front of the vehicle as a fourth segmentation area to obtain a fourth segmentation image.
Optionally, the inputting, by the processor 301, the segmented image into a preset light intensity detection model, and obtaining a light intensity detection result of the image in front of the vehicle through processing by the light intensity detection model, includes:
and simultaneously inputting the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image into the preset light intensity detection model, and processing the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image through the preset light intensity detection model to obtain light intensity detection results of the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image.
Optionally, the processing of the image in front of the vehicle through a preset light intensity processing model, performed by the processor 301 when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, includes:
when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, obtaining an image to be processed according to the detection result, wherein the image to be processed is at least one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is greater than the first preset value;
inputting the image to be processed into a preset light intensity processing model, processing the image to be processed through the preset light intensity processing model to obtain a target image corresponding to the image to be processed, wherein the light intensity of the target image is smaller than a first preset value.
Optionally, the preset light intensity processing model includes an image generation network, and the inputting of the image to be processed into the preset light intensity processing model and processing of the image through the model to obtain the corresponding target image, performed by the processor 301, includes:
inputting the image to be processed into a preset light intensity processing model, and generating a target image corresponding to the image to be processed through an image generation network in the preset light intensity processing model.
Optionally, before the sending the processed vehicle front image to a head-up display system and projecting the processed vehicle front image to a designated position on a windshield for display by the head-up display system, the method executed by the processor 301 further includes:
when it is detected that the light intensity in the image in front of the vehicle is smaller than a second preset value, obtaining a display area image according to the detection result, wherein the display area image is the one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is smaller than the second preset value and is the smallest, and the second preset value is smaller than the first preset value;
and determining the designated position on the windshield according to the position of the display area image in the image in front of the vehicle.
The electronic equipment provided by the embodiment of the invention can realize each process realized by the night driving vision auxiliary method in the method embodiment, and can achieve the same beneficial effect. To avoid repetition, further description is omitted here.
The embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the night driving visual assistance method or the application-side night driving visual assistance method provided in the embodiments of the present invention, and can achieve the same technical effect; to avoid repetition, details are not described here again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only a preferred embodiment of the present invention and certainly cannot be used to limit the scope of the claims; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (10)

1. A night driving visual auxiliary method is characterized by comprising the following steps:
when the vehicle is driven at night, acquiring an image in front of the vehicle in real time, and detecting the light intensity of the image in front of the vehicle;
when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, processing the image in front of the vehicle through a preset light intensity processing model, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield through the head-up display system for displaying.
2. The method of claim 1, wherein the detecting the light intensity of the image in front of the vehicle comprises:
segmenting the image in front of the vehicle according to a preset proportion to obtain a corresponding number of segmented images;
and inputting the segmentation image into a preset light intensity detection model, and processing the segmentation image through the light intensity detection model to obtain a light intensity detection result of the image in front of the vehicle.
3. The method according to claim 2, wherein the segmenting the image in front of the vehicle according to the preset proportion to obtain a corresponding number of segmented images comprises:
tiling the image in front of the vehicle through a 10 by 10 grid;
dividing by taking the upper left corner 6 x 6 of the image in front of the vehicle as a first divided area to obtain a first divided image;
dividing by taking the lower left corner 6 x 6 of the image in front of the vehicle as a second divided area to obtain a second divided image;
segmenting the vehicle front image by taking the upper right corner 6 x 6 of the vehicle front image as a third segmentation area to obtain a third segmentation image;
and segmenting by taking the lower right corner 6 x 6 of the image in front of the vehicle as a fourth segmentation area to obtain a fourth segmentation image.
4. The method as claimed in claim 3, wherein the inputting the segmentation image into a preset light intensity detection model, and the processing by the light intensity detection model to obtain the light intensity detection result of the image in front of the vehicle comprises:
and simultaneously inputting the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image into the preset light intensity detection model, and processing the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image through the preset light intensity detection model to obtain light intensity detection results of the first segmentation image, the second segmentation image, the third segmentation image and the fourth segmentation image.
5. The method of claim 4, wherein when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, processing the image in front of the vehicle through a preset light intensity processing model comprises:
when it is detected that the light intensity in the image in front of the vehicle is greater than a first preset value, obtaining an image to be processed according to the detection result, wherein the image to be processed is at least one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is greater than the first preset value;
inputting the image to be processed into a preset light intensity processing model, processing the image to be processed through the preset light intensity processing model to obtain a target image corresponding to the image to be processed, wherein the light intensity of the target image is smaller than a first preset value.
6. The method as claimed in claim 5, wherein the preset light intensity processing model includes an image generation network, the inputting the image to be processed into the preset light intensity processing model, and the processing the image to be processed by the preset light intensity processing model to obtain a target image corresponding to the image to be processed comprises:
inputting the image to be processed into a preset light intensity processing model, and generating a target image corresponding to the image to be processed through an image generation network in the preset light intensity processing model.
7. The method of claim 4, wherein before sending the processed image of the front of the vehicle to a heads-up display system, the processed image of the front of the vehicle being projected by the heads-up display system onto a designated location on a windshield for display, the method further comprises:
when it is detected that the light intensity in the image in front of the vehicle is smaller than a second preset value, obtaining a display area image according to the detection result, wherein the display area image is the one of the first segmentation image, the second segmentation image, the third segmentation image, and the fourth segmentation image whose light intensity is smaller than the second preset value and is the smallest, and the second preset value is smaller than the first preset value;
and determining the designated position on the windshield according to the position of the display area image in the image in front of the vehicle.
8. A night driving visual aid, the device comprising:
the acquisition module is used for acquiring an image in front of a vehicle in real time and detecting the light intensity of the image in front of the vehicle when the vehicle is driven at night;
the processing module is used for processing the image in front of the vehicle through a preset light intensity processing model when detecting that the light intensity in the image in front of the vehicle is greater than a first preset value, so that the light intensity of the processed image in front of the vehicle is smaller than the light intensity in the image in front of the vehicle;
and the display module is used for sending the processed image in front of the vehicle to a head-up display system, and projecting the processed image in front of the vehicle to a specified position on a windshield through the head-up display system for displaying.
9. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the night driving visual assistance method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the night driving visual assistance method according to any one of claims 1 to 7.
CN202111646632.1A 2021-12-29 2021-12-29 Night driving vision auxiliary method and related equipment Pending CN114445797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111646632.1A CN114445797A (en) 2021-12-29 2021-12-29 Night driving vision auxiliary method and related equipment

Publications (1)

Publication Number Publication Date
CN114445797A true CN114445797A (en) 2022-05-06

Family

ID=81366213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111646632.1A Pending CN114445797A (en) 2021-12-29 2021-12-29 Night driving vision auxiliary method and related equipment

Country Status (1)

Country Link
CN (1) CN114445797A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108248508A (en) * 2016-12-29 2018-07-06 乐视汽车(北京)有限公司 Driving safety display methods, system, medium and electronic equipment
CN110135235A (en) * 2019-03-13 2019-08-16 北京车和家信息技术有限公司 A kind of dazzle processing method, device and vehicle
CN110263714A (en) * 2019-06-20 2019-09-20 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, electronic equipment and storage medium
CN111783878A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN112183428A (en) * 2020-10-09 2021-01-05 浙江大学中原研究院 Wheat planting area segmentation and yield prediction method

Similar Documents

Publication Publication Date Title
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
CN105769120B (en) Method for detecting fatigue driving and device
CN110703904A (en) Augmented virtual reality projection method and system based on sight tracking
CN112677977B (en) Driving state identification method and device, electronic equipment and steering lamp control method
US20190180132A1 (en) Method and Apparatus For License Plate Recognition Using Multiple Fields of View
CN110341621B (en) Obstacle detection method and device
CN113255444A (en) Training method of image recognition model, image recognition method and device
US11403865B2 (en) Number-of-occupants detection system, number-of-occupants detection method, and program
CN111091104A (en) Target object protection detection method, device, equipment and storage medium
CN108256487B (en) Driving state detection device and method based on reverse dual-purpose
CN111422203B (en) Driving behavior evaluation method and device
CN114445797A (en) Night driving vision auxiliary method and related equipment
CN109101908B (en) Method and device for detecting region of interest in driving process
CN116563801A (en) Traffic accident detection method, device, electronic equipment and medium
CN116616691A (en) Man-machine interaction vision detection method and system based on virtual reality
CN116433544A (en) Vehicle environment monitoring method, device, equipment and storage medium
CN113421191A (en) Image processing method, device, equipment and storage medium
CN109960034B (en) System and method for adjusting brightness of head-up display
JP2021113753A (en) Fogging determination device and fogging determination method
CN116152761B (en) Lane line detection method and device
CN113619600B (en) Obstacle data diagnosis method, obstacle data diagnosis device, movable carrier, and storage medium
CN116152790B (en) Safety belt detection method and device
CN109389073A (en) The method and device of detection pedestrian area is determined by vehicle-mounted camera
CN114291077B (en) Vehicle anti-collision early warning method and device
CN113570901B (en) Vehicle driving assisting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination