CN118071658A - Image processing method, apparatus, electronic device, and computer-readable storage medium - Google Patents
- Publication number
- CN118071658A (application number CN202211455715.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- target
- region
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 5/00 — Image enhancement or restoration
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 7/11 — Region-based segmentation
- G06T 7/136 — Segmentation; edge detection involving thresholding
- G06T 2207/10144 — Varying exposure
- G06T 2207/20208 — High dynamic range [HDR] image processing
- G06T 2207/20221 — Image fusion; image merging
Abstract
The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than that of the intermediate image; performing brightness segmentation on at least one of the first image and the at least one frame of intermediate image to obtain a target brightness region in the image to be processed; and, when a target color region within the target brightness region satisfies a color correction condition, performing color correction on the target color region to obtain a target image. With this method, the colors of an image can be corrected accurately.
Description
Technical Field
The present application relates to imaging technology, and in particular, to an image processing method, apparatus, electronic device, computer readable storage medium, and computer program product.
Background
When shooting scenes such as night scenes or indoor scenes lit by artificial light sources, a single captured frame often cannot render image details clearly. To compensate, images are commonly processed with high dynamic range imaging (HDRI, High Dynamic Range Imaging): an image with a wider dynamic range and more detail is synthesized from multiple frames captured at different exposure durations.
However, frames captured in the same scene at different exposure durations differ in color, so the color of the fused image deviates from the scene as seen by the human eye.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer readable storage medium, which can accurately correct colors of images.
An image processing method, the method comprising:
Acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
Performing brightness segmentation processing on at least one of the first image and at least one frame of the intermediate image to obtain a target brightness region in the image to be processed;
And under the condition that a target color area in the target brightness area meets the color correction condition, performing color correction processing on the target color area to obtain a target image.
An image processing apparatus, the apparatus comprising:
An acquisition module, configured to acquire an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
The brightness segmentation module is used for carrying out brightness segmentation processing on at least one of the first image and at least one frame of the intermediate image to obtain a target brightness area in the image to be processed;
And the color correction module is used for carrying out color correction processing on the target color area to obtain a target image under the condition that the target color area in the target brightness area meets the color correction condition.
An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
Acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
Performing brightness segmentation processing on at least one of the first image and at least one frame of the intermediate image to obtain a target brightness region in the image to be processed;
And under the condition that a target color area in the target brightness area meets the color correction condition, performing color correction processing on the target color area to obtain a target image.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
Performing brightness segmentation processing on at least one of the first image and at least one frame of the intermediate image to obtain a target brightness region in the image to be processed;
And under the condition that a target color area in the target brightness area meets the color correction condition, performing color correction processing on the target color area to obtain a target image.
A computer program product comprising a computer program which when executed by a processor performs the steps of:
Acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
Performing brightness segmentation processing on at least one of the first image and at least one frame of the intermediate image to obtain a target brightness region in the image to be processed;
And under the condition that a target color area in the target brightness area meets the color correction condition, performing color correction processing on the target color area to obtain a target image.
With the image processing method, apparatus, electronic device, computer-readable storage medium, and computer program product above, the acquired image to be processed is generated based on a first image and at least one frame of intermediate image, where the exposure time of the first image is longer than that of the intermediate image. Brightness segmentation is performed on at least one of the first image and the at least one frame of intermediate image to obtain a target brightness region in the image to be processed, so that image regions likely to exhibit color deviation can be identified based on brightness. When a target color region within the target brightness region satisfies the color correction condition, color correction is applied exactly where the color deviation exists. Regions of the image with color deviation can therefore be corrected accurately, making the colors of the resulting target image more consistent with the actual scene as seen by the human eye.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a diagram of an application environment for an image processing method in one embodiment;
FIG. 2 is a flow chart of an image processing method in one embodiment;
FIG. 3 is a schematic diagram of a first image, a second image, and a third image obtained at different exposure durations in one embodiment;
FIG. 4 is a schematic diagram of image changes for color reproduction in one embodiment;
FIG. 5 is a flow chart of an image processing method in one embodiment;
FIG. 6 is a flowchart of an image processing method in another embodiment;
FIG. 7 is a block diagram showing the structure of an image processing apparatus in one embodiment;
fig. 8 is an internal structural diagram of an electronic device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood that the terms first, second, etc. as used herein may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the application. Both the first image and the second image are images, but they are not the same image.
The image processing method provided by the embodiments of the application can be applied in the application environment shown in fig. 1, in which the electronic device 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104, or placed on a cloud or other network server. In one embodiment, the electronic device 102 and the server 104 may each perform the image processing method independently, or they may perform it cooperatively. When they cooperate, the electronic device 102 acquires an image to be processed, which is generated based on a first image and at least one frame of intermediate image, the exposure time of the first image being longer than that of the intermediate image, and sends it to the server 104. The server 104 performs brightness segmentation on at least one of the first image and the at least one frame of intermediate image to obtain a target brightness region in the image to be processed, and, when a target color region within the target brightness region satisfies the color correction condition, performs color correction on the target color region to obtain a target image. The electronic device 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device, or portable wearable device; Internet of Things devices include smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like, and portable wearable devices include smart watches, smart bracelets, headsets, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, an image processing method is provided, and the method is applied to the electronic device in fig. 1 for illustration, and includes the following steps:
step 202, acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image.
The image to be processed may be a high dynamic range (HDR) image, specifically any one of an RGB (Red, Green, Blue) image, a RAW image, a grayscale image, a depth image, the image corresponding to the Y component of a YUV image, and the like. A RAW image is the raw data produced when the image sensor converts the captured light signal into a digital signal. In a YUV image, "Y" represents brightness (luminance or luma), i.e., the grayscale value, while "U" and "V" represent chromaticity (chrominance or chroma), which describe the color and saturation of a pixel. The first image may likewise be any one of an RGB image, a RAW image, a grayscale image, a depth image, or a YUV image.
The first image and the intermediate image may be images acquired for an arbitrary scene, such as a person, an animal image, a landscape image, or an industrial device image, but are not limited thereto. The first image and the intermediate image are images of the same scene, and the image to be processed is generated based on the first image and the at least one frame of intermediate image.
In an alternative embodiment, the first image and the intermediate image may be preview images, and the image to be processed is an image generated based on the preview images.
It will be appreciated that the image to be processed, the first image and the intermediate image may be a complete image or may be a partial image area in a complete image.
The brightness of the image to be processed is different from the brightness of the first image, and the brightness of the image to be processed is different from the brightness of at least one frame of intermediate image. The brightness of the image to be processed is different from the brightness of the first image, which means that the brightness value of the pixel in the image to be processed is different from the brightness value of the pixel in the first image, specifically, the brightness value corresponding to at least one pixel in the image to be processed is different from the brightness value of the corresponding pixel in the first image. The brightness of the image to be processed is different from the brightness of at least one frame of intermediate image, which means that the brightness value of the pixel in the image to be processed is different from the brightness value of the pixel in at least one frame of intermediate image, specifically, the brightness value corresponding to at least one pixel in the image to be processed is different from the brightness value of the corresponding pixel in at least one frame of intermediate image.
Specifically, the electronic device can acquire an image of any scene through the camera to obtain a first image. After the first image is obtained, the electronic device can shorten the exposure time of the camera and acquire images of the same scene to obtain at least one frame of intermediate image. The exposure time of the first image is longer than the exposure time of the intermediate image. And the electronic equipment performs fusion processing on the first image and at least one frame of intermediate image to obtain an image to be processed.
In this embodiment, the electronic device obtains a fusion weight map corresponding to the first image and a fusion weight map corresponding to at least one frame of intermediate image. The fusion weight map comprises pixel weights corresponding to the pixels of the corresponding image when fusion processing is carried out. And according to each fusion weight graph, carrying out fusion processing on the first image and at least one frame of intermediate image to obtain an image to be processed.
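The patent does not give an implementation of this fusion step. A minimal numpy sketch of the per-pixel weighted combination described above could look as follows; normalizing the weights so they sum to 1 at each pixel is an assumption for illustration, since the patent only states that each fusion weight map holds per-pixel weights:

```python
import numpy as np

def fuse_images(images, weight_maps):
    """Fuse registered same-sized frames using per-pixel weight maps.

    images: list of (H, W) float arrays (e.g. luminance planes).
    weight_maps: list of (H, W) float arrays, one per frame.
    The weights are normalized per pixel so they sum to 1, then each
    output pixel is the weighted sum of the corresponding input pixels.
    """
    stack = np.stack(images).astype(np.float64)         # (N, H, W)
    weights = np.stack(weight_maps).astype(np.float64)  # (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True)       # normalize per pixel
    return (stack * weights).sum(axis=0)                # weighted sum per pixel
```

For example, fusing a long-exposure frame of constant luminance 200 with a shorter-exposure frame of 100 under weights 3:1 yields a constant 175 everywhere.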
And 204, performing brightness segmentation processing on at least one of the first image and at least one frame of intermediate image to obtain a target brightness region in the image to be processed.
The luminance division process refers to determining a target luminance area in an image based on the luminance of each pixel in the image.
The target luminance region is the region of the image to be processed that satisfies a luminance condition; it may, for example, be a highlight region. The luminance condition may specifically define the region formed by the pixels whose luminance exceeds a luminance threshold, or by the pixels for which the luminance difference between corresponding pixels of at least two frames exceeds a difference threshold. For example, the target luminance region of the image to be processed may be the region formed by its own pixels whose luminance exceeds the luminance threshold; or the region of the image to be processed corresponding to those pixels of the first image whose luminance difference from the corresponding pixels of the at least one frame of intermediate image exceeds the difference threshold; or the region of the image to be processed corresponding to those pixels of the at least one frame of intermediate image whose luminance exceeds the luminance threshold.
Specifically, the electronic device may perform a luminance segmentation process on the first image to determine a target luminance region in the first image. In the image to be processed, an area corresponding to the target luminance area in the first image is determined as the target luminance area of the image to be processed.
Or the electronic device may perform a luminance segmentation process on the at least one frame of intermediate image to determine a target luminance region in the at least one frame of intermediate image. In the image to be processed, an area corresponding to the target brightness area in at least one frame of intermediate image is determined as the target brightness area of the image to be processed.
In this embodiment, the electronic device may perform the luminance segmentation process on the first image and the at least one frame of intermediate image to determine the target luminance area in the first image and the at least one frame of intermediate image. In the image to be processed, an area corresponding to a target brightness area in the target brightness areas in the first image and at least one frame of intermediate image is determined as the target brightness area of the image to be processed.
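The threshold-based variants of the segmentation above can be sketched as a binary mask. The threshold value and the use of a union across the frames' masks are assumptions for illustration; the patent fixes neither:

```python
import numpy as np

# Hypothetical threshold; the patent does not specify a value.
LUMA_THRESHOLD = 180.0

def target_luminance_mask(first_luma, intermediate_lumas):
    """Union of the above-threshold masks of the first image and each
    intermediate frame. Because all frames are registered with the image
    to be processed, the same positions mark its target luminance region.
    """
    mask = first_luma > LUMA_THRESHOLD
    for luma in intermediate_lumas:
        mask |= luma > LUMA_THRESHOLD
    return mask
```

A pixel thus belongs to the target luminance region as soon as it exceeds the threshold in any of the considered frames.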
In step 206, in the case that the target color area in the target brightness area satisfies the color correction condition, the color correction process is performed on the target color area, so as to obtain the target image.
Wherein the color correction condition includes at least one of a color enhancement condition and a color restoration condition, and the color correction processing includes at least one of color enhancement processing and color restoration processing; the target color region includes at least one of: at least one first color region on which color enhancement is performed, and a second color region on which color restoration is performed.
Specifically, in the case where at least one first color region in the target luminance region satisfies the color enhancement condition, color enhancement processing is performed on the at least one first color region, resulting in the target image.
In the case where the second color region in the target luminance region satisfies the color restoration condition, color restoration processing is performed on the second color region, resulting in the target image.
In the case where at least one first color region in the target luminance region satisfies the color enhancement condition and the second color region in the target luminance region satisfies the color restoration condition, color enhancement processing is performed on the at least one first color region and color restoration processing is performed on the second color region, resulting in the target image.
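The patent does not specify how the enhancement or restoration is actually computed. Purely as an illustration (the neutral chroma value 128 and the gain values are assumptions, not the patent's method), region-wise correction on a chroma plane could be sketched as a masked chroma scaling, where a gain above 1 plays the role of enhancement and a gain below 1 pulls colors back toward neutral:

```python
import numpy as np

def correct_chroma(chroma, mask, gain):
    """Scale a chroma plane (U or V, neutral at 128) inside `mask`.

    gain > 1 sketches color enhancement; gain < 1 sketches restoration
    toward neutral. Pixels outside the mask are left untouched.
    """
    out = chroma.astype(np.float64).copy()
    out[mask] = 128.0 + (out[mask] - 128.0) * gain
    return np.clip(out, 0.0, 255.0)
```

Applying it with different gains to the first and second color regions' masks would mirror the three cases above.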
According to the image processing method, the image to be processed is generated based on the first image and at least one frame of intermediate image, the exposure time of the first image is longer than the exposure time of the intermediate image, at least one of the first image and the at least one frame of intermediate image is subjected to brightness segmentation processing, the target brightness area in the image to be processed is obtained, and the image area possibly having color deviation can be judged based on brightness. Under the condition that a target color area in the target brightness area meets the color correction condition, the color correction processing is performed on the target color area when the color deviation exists in the target color area, and the color correction can be accurately performed on the area with the color deviation in the image, so that the color of the obtained target image is more consistent with that of an actual scene seen by human eyes.
In one embodiment, the at least one frame of intermediate image includes a second image and a third image, the second image having an exposure time longer than an exposure time of the third image; acquiring an image to be processed, including: acquiring a first image, a second image and a third image; and carrying out fusion processing according to the fusion weight graphs corresponding to the first image, the second image and the third image respectively to obtain an image to be processed.
Specifically, the electronic device can acquire an image of any scene through the camera to obtain a first image. After the first image is obtained, the electronic device can shorten the exposure time of the camera and acquire images of the same scene to obtain a second image. After the second image is obtained, the electronic device can continuously shorten the exposure time of the camera and acquire images of the same scene to obtain a third image. The exposure time of the first image is longer than the exposure time of the second image, and the exposure time of the second image is longer than the exposure time of the third image.
In this embodiment, the electronic device may acquire, by using the camera, an image of an arbitrary scene during a first exposure period, to obtain at least one first image. The electronic equipment can adjust the exposure time of the camera from the first exposure time to the second exposure time, and acquire images of the same scene under the second exposure time to obtain a second image. The first exposure time period is longer than the second exposure time period. The electronic equipment can adjust the exposure time of the camera from the second exposure time to a third exposure time, and acquire images of the same scene under the third exposure time to obtain a third image. The second exposure time period is longer than the third exposure time period.
It will be appreciated that the order in which the first image, the second image, and the third image are acquired is not limited; any one of them may be acquired first, as long as three images with different exposure durations are ultimately obtained.
In one embodiment, the at least one frame of intermediate image includes a second image and a third image, the second image having an exposure time longer than an exposure time of the third image; performing brightness segmentation processing on at least one of the first image and at least one frame of intermediate image to obtain a target brightness region in the image to be processed, wherein the method comprises the following steps:
Performing brightness segmentation processing on at least one of the first image and the second image to obtain an initial brightness region in the image to be processed; according to the second image and the corresponding first fusion weight diagram, and the third image and the corresponding second fusion weight diagram, the initial brightness area is adjusted, and a target brightness area is obtained; the first fused weight map includes first pixel weights for each pixel in the second image and the second fused weight map includes second pixel weights for each pixel in the third image.
Specifically, the at least one frame of intermediate image includes a second image and a third image, and the second image has an exposure time longer than an exposure time of the third image. The electronic device performs a luminance segmentation process on at least one of the first image and the second image to determine an initial luminance region in the at least one of the first image and the second image. And determining a corresponding area in the image to be processed according to the determined initial brightness area, and obtaining the initial brightness area in the image to be processed.
The electronic equipment acquires a first fusion weight map corresponding to the second image and a second fusion weight map corresponding to the third image, wherein the first fusion weight map comprises first pixel weights corresponding to all pixels in the second image when the pixels are fused, and the second fusion weight map comprises second pixel weights corresponding to all pixels in the third image when the pixels are fused.
The electronic device screens out first target pixels from the second image according to the first fusion weight map, and screens out first target pixels from the third image according to the second fusion weight map. The electronic device then adjusts the initial brightness region in the image to be processed according to the screened first target pixels, obtaining the target brightness region in the image to be processed.
In this embodiment, adjusting the initial brightness region according to the second image and the corresponding first fusion weight map, and the third image and the corresponding second fusion weight map, to obtain the target brightness region includes:
adjusting the initial brightness region according to the second image and the corresponding first fusion weight map, and the third image and the corresponding second fusion weight map, to obtain an adjusted brightness region; and performing morphological processing on the adjusted brightness region to obtain the target brightness region.
The morphological processing may be dilation, erosion, or connected-component processing. Morphological processing can enlarge the brightness region and avoid missing pixels that belong to it.
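The dilation mentioned above can be sketched in plain NumPy; this is a minimal illustration with an assumed 3×3 square structuring element (a real implementation would typically use OpenCV's `cv2.dilate` or `scipy.ndimage.binary_dilation`):

```python
import numpy as np

def dilate(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary dilation of a boolean mask with a 3x3 square structuring element."""
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant")  # pad border with False
        out = np.zeros_like(out)
        # A pixel becomes True if any pixel in its 3x3 neighbourhood is True.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= padded[1 + dy : padded.shape[0] - 1 + dy,
                              1 + dx : padded.shape[1] - 1 + dx]
    return out
```

Applying `dilate` to the adjusted brightness region grows it by one pixel per iteration, which is how the missed boundary pixels are pulled into the region.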
In this embodiment, at least one of the first image and the second image is subjected to brightness segmentation processing to obtain an initial brightness region in the image to be processed, so as to preliminarily determine a region that meets the brightness condition. The exposure time of the second image is longer than that of the third image, and both the second image and the third image are dark-frame images. The first fusion weight map includes the first pixel weight of each pixel in the second image, and the second fusion weight map includes the second pixel weight of each pixel in the third image, so the two weight maps can reflect the pixel distribution of the brighter regions in the images. Adjusting the initial brightness region according to the second image and its first fusion weight map, and the third image and its second fusion weight map, therefore allows the target brightness region in the image to be processed to be determined more accurately.
In one embodiment, performing a luminance segmentation process on at least one of the first image and the second image to obtain an initial luminance region in the image to be processed, including:
Determining a luminance difference between each pixel in the first image and a corresponding pixel in the second image; screening out a first target pixel from each pixel of the first image and each pixel of the second image according to each brightness difference value;
And determining an initial brightness area formed by corresponding pixels in the image to be processed according to each first target pixel.
Specifically, for each pixel in the first image, the electronic device calculates the difference between its luminance value and the luminance value of the corresponding pixel in the second image, obtaining the luminance difference values. According to these luminance difference values, first target pixels are screened out from the pixels of the first image and from the pixels of the second image. In the image to be processed, the region formed by the pixels corresponding to the first target pixels is determined and taken as the initial brightness region.
In this embodiment, the screening the first target pixel from each pixel of the first image and each pixel of the second image according to each luminance difference value includes:
comparing each brightness difference value with a difference value threshold value, and screening out a target brightness difference value which is larger than the difference value threshold value in each brightness difference value; and taking the corresponding pixels of the target brightness difference value in the first image and the second image as first target pixels to obtain first target pixels in the first image and first target pixels in the second image.
In this embodiment, the brightness difference between each pixel in the first image and the corresponding pixel in the second image is determined, and the first target pixel is screened out from each pixel in the first image and each pixel in the second image according to each brightness difference, so as to accurately determine the initial brightness area formed by the corresponding pixel in the image to be processed according to each first target pixel.
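The difference-based screening above can be sketched as follows; the function name and the use of per-pixel luminance arrays are assumptions for illustration:

```python
import numpy as np

def luminance_diff_mask(y1: np.ndarray, y2: np.ndarray,
                        diff_thresh: float) -> np.ndarray:
    """Mark pixels whose luminance difference between the first image (y1)
    and the second image (y2) exceeds the difference threshold."""
    diff = np.abs(y1.astype(np.float32) - y2.astype(np.float32))
    return diff > diff_thresh
```

The resulting boolean mask marks the first target pixels, and the region they form in the image to be processed is the initial brightness region.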
In one embodiment, performing a luminance segmentation process on at least one of the first image and the second image to obtain an initial luminance region in the image to be processed, including:
determining a second target pixel of which the brightness value is larger than the brightness threshold value from the pixels of the second image;
And determining an initial brightness area formed by corresponding pixels in the image to be processed according to each second target pixel.
Specifically, the electronic device may obtain a brightness threshold value, and determine a brightness value corresponding to each pixel in the second image. And comparing the brightness value of each pixel in the second image with a brightness threshold value, and screening out a second target pixel with the brightness value larger than the brightness threshold value. And determining pixels corresponding to each second target pixel in the image to be processed, and taking the area formed by the corresponding pixels as an initial brightness area in the image to be processed.
In this embodiment, the exposure time of the first image is longer than that of the second image, so the second image retains more information in highlight regions than the first image. Therefore, second target pixels whose brightness values are greater than the brightness threshold are determined from the pixels of the second image, and the initial brightness region formed by the corresponding pixels in the image to be processed can be accurately determined from these second target pixels.
In one embodiment, adjusting the initial luminance region according to the second image and the corresponding first fusion weight map, and the third image and the corresponding second fusion weight map to obtain the target luminance region includes:
Determining a third target pixel with a pixel weight greater than a weight threshold value in each pixel of the second image according to each first pixel weight in the first fusion weight map; determining a third target pixel with a pixel weight greater than a weight threshold value in each pixel of the third image according to each second pixel weight in the second fusion weight map; and adjusting the initial brightness area in the image to be processed according to each third target pixel to obtain the target brightness area in the image to be processed.
Specifically, the electronic device determines a first pixel weight greater than a weight threshold value in each first pixel weight of the first fusion weight map, and takes a pixel corresponding to the first pixel weight greater than the weight threshold value in the second image as a third target pixel. The electronic device determines a second pixel weight greater than a weight threshold value in each second pixel weight of the second fusion weight map, and takes a pixel corresponding to the second pixel weight greater than the weight threshold value in the third image as a third target pixel. And determining corresponding pixels of each third target pixel in the image to be processed, and taking the area formed by the corresponding pixels and the initial brightness area as a target brightness area in the image to be processed. The target luminance region includes a corresponding pixel of each third target pixel in the image to be processed.
In this embodiment, third target pixels whose pixel weights are greater than the weight threshold are determined among the pixels of the second image according to the first pixel weights in the first fusion weight map, so that the third target pixels meeting the brightness condition in the second image can be screened out. Likewise, third target pixels whose pixel weights are greater than the weight threshold are determined among the pixels of the third image according to the second pixel weights in the second fusion weight map, screening out the third target pixels meeting the brightness condition in the third image. The initial brightness region in the image to be processed is then adjusted according to these third target pixels, enlarging the initial brightness region. Because pixels meeting the brightness condition can be screened out more accurately based on the pixel weights, the resulting target brightness region is more accurate, which in turn makes the subsequent color correction more accurate.
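The weight-based adjustment above amounts to taking the union of the initial region with the high-weight pixels of both weight maps. A minimal sketch, with assumed function and parameter names:

```python
import numpy as np

def adjust_region(initial_mask: np.ndarray, w1: np.ndarray, w2: np.ndarray,
                  weight_thresh: float) -> np.ndarray:
    """Union the initial luminance region with the third target pixels:
    pixels whose fusion weight exceeds the threshold in either weight map."""
    third_targets = (w1 > weight_thresh) | (w2 > weight_thresh)
    return initial_mask | third_targets
```

The returned mask is the enlarged target brightness region on which color correction is subsequently performed.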
In one embodiment, performing a luminance segmentation process on at least one of the first image and the second image to obtain an initial luminance region in the image to be processed, including:
Converting the first image into a first gray scale image and converting the second image into a second gray scale image; and performing brightness segmentation processing on at least one of the first gray level image and the second gray level image to obtain an initial brightness region in the image to be processed.
Specifically, the electronic device generates a first grayscale image from the pixels of the first image, and a second grayscale image from the pixels of the second image. Further, the electronic device determines the color values of the pixels of the first image on each color channel and generates the first grayscale image from them, and likewise determines the color values of the pixels of the second image on each color channel and generates the second grayscale image. For example, for each pixel in the first image, the color values on the color channels are averaged to obtain a pixel average, and the pixel averages form the first grayscale image. The color channels are the red, green, and blue channels.
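The channel-averaging conversion described above is a one-liner in NumPy; the function name is an assumption for illustration:

```python
import numpy as np

def to_gray(img: np.ndarray) -> np.ndarray:
    """Grayscale by averaging the red, green and blue channel values
    of each pixel, as described in the embodiment."""
    return img.astype(np.float32).mean(axis=-1)
```

Note that this is the simple per-pixel mean named by the embodiment, not the perceptually weighted luma (e.g. 0.299R + 0.587G + 0.114B) used by some pipelines.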
In this embodiment, performing a luminance segmentation process on at least one of the first gray scale image and the second gray scale image to obtain an initial luminance region in the image to be processed includes: determining a brightness difference between each pixel in the first gray scale image and a corresponding pixel in the second gray scale image; screening out a first target pixel from each pixel of the first gray level image and each pixel of the second gray level image according to each brightness difference value; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each first target pixel.
In this embodiment, performing a luminance segmentation process on at least one of the first gray scale image and the second gray scale image to obtain an initial luminance region in the image to be processed includes: determining a second target pixel with a brightness value greater than a brightness threshold value from the pixels of the second gray level image; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each second target pixel.
In this embodiment, the first image is converted into the first grayscale image and the second image into the second grayscale image, and brightness segmentation is performed after the conversion to grayscale, which effectively reduces the amount of computation and improves processing efficiency. Performing brightness segmentation processing on at least one of the first grayscale image and the second grayscale image preliminarily determines the region in the image to be processed that meets the brightness condition.
In one embodiment, the color correction conditions include at least one of a color enhancement condition and a color restoration condition, and the color correction processing includes at least one of color enhancement processing and color restoration processing; the target color region includes at least one of at least one first color region on which color enhancement is performed and a second color region on which color restoration is performed.
Specifically, in the case where at least one first color region in the target brightness region satisfies the color enhancement condition, color enhancement processing is performed on the at least one first color region to obtain the target image.
In the case where the second color region in the target brightness region satisfies the color restoration condition, color restoration processing is performed on the second color region to obtain the target image.
In the case where at least one first color region in the target brightness region satisfies the color enhancement condition and the second color region in the target brightness region satisfies the color restoration condition, color enhancement processing is performed on the at least one first color region and color restoration processing is performed on the second color region to obtain the target image.
Wherein the first color region is different from the second color region. For example, the first color region is a red region or a blue region, and the second color region is a light blue region.
In this embodiment, the color correction conditions include at least one of a color enhancement condition and a color restoration condition, the color correction processing includes at least one of color enhancement processing and color restoration processing, and the target color region includes at least one of a first color region for color enhancement and a second color region for color restoration. Whenever one of the conditions is satisfied, the corresponding color correction processing is applied to the color region that satisfies it. By providing multiple color correction conditions and processes, the color correction requirements of different scenes can be effectively met.
In one embodiment, in a case where a target color area in the target luminance area satisfies a color correction condition, performing color correction processing on the target color area to obtain a target image, includes:
Converting the target luminance region from the first color space to the second color space; the second color space is different from the first color space; performing color correction processing on the target luminance area in the second color space in the case where the target luminance area in the second color space satisfies the color correction condition; and converting the target brightness area after the color correction processing from the second color space to the first color space to obtain a target image.
Wherein the first color space and the second color space each correspond to a plurality of color channels, but the second color space is different from the first color space. For example, the first color space may be an RGB color space, and the second color space may be an HSV (Hue, Saturation, Value) color space. Hue represents the color information, i.e., the position of the color in the spectrum. Saturation, also called purity, represents the vividness of the color. Value represents the brightness of the color.
In this embodiment, the second color space is different from the first color space, and only the target brightness region of the image to be processed is converted from the first color space to the second color space, rather than the whole image, which reduces the amount of conversion computation and improves processing efficiency. When the target brightness region in the second color space satisfies the color correction condition, color correction processing is performed on it in the second color space, and the corrected region is then converted back from the second color space to the first color space to obtain the target image. The color deviation in the corrected region is thereby eliminated, so that the color of the resulting target image is closer to the actual scene seen by the human eye, and the image color appears more real and natural.
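A minimal sketch of the region-only round trip between the two example color spaces, using Python's standard `colorsys` module per pixel; the function name, the saturation scale factor, and the [0, 1] float RGB convention are assumptions (a production pipeline would vectorize the conversion rather than loop):

```python
import colorsys
import numpy as np

def correct_region_saturation(img: np.ndarray, mask: np.ndarray,
                              sat_scale: float) -> np.ndarray:
    """Convert only the masked pixels of an RGB image (floats in [0, 1]) to
    HSV, scale their saturation, and convert back to RGB. Pixels outside the
    target brightness region are untouched."""
    out = img.copy()
    for y, x in zip(*np.nonzero(mask)):
        h, s, v = colorsys.rgb_to_hsv(*img[y, x])
        s = min(1.0, s * sat_scale)  # scale > 1 enhances, < 1 restores
        out[y, x] = colorsys.hsv_to_rgb(h, s, v)
    return out
```

The same routine serves both later embodiments: a scale factor above 1 implements the saturation increase of color enhancement, and a factor below 1 implements the saturation reduction of color restoration.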
In one embodiment, in a case where a target color area in the target luminance area satisfies a color correction condition, performing color correction processing on the target color area to obtain a target image, includes:
Under the condition that at least one first color area in the target brightness area meets the color enhancement condition, performing color enhancement processing on the at least one first color area to obtain a first color area after color enhancement;
and under the condition that the second color area in the target brightness area meets the color restoration condition, performing color restoration processing on the second color area to obtain a color restored second color area so as to obtain the target image.
Specifically, the electronic device determines a first color region in the target luminance region, and determines whether the first color region satisfies a color enhancement condition. And under the condition that at least one first color region meets the color enhancement condition, performing color enhancement processing on each first color region meeting the color enhancement condition to obtain color enhanced first color regions.
The electronic device determines a second color region in the target brightness region and determines whether it satisfies the color restoration condition. If the second color region satisfies the color restoration condition, color restoration processing is performed on it to obtain a color-restored second color region.
The electronic device replaces the first color region of the target brightness region in the image to be processed with the color-enhanced first color region, and replaces the second color region with the color-restored second color region, thereby obtaining the target image.
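The replacement step can be sketched with boolean-mask indexing; the function and parameter names are assumptions for illustration:

```python
import numpy as np

def replace_regions(image: np.ndarray,
                    first_mask: np.ndarray, first_corrected: np.ndarray,
                    second_mask: np.ndarray, second_corrected: np.ndarray) -> np.ndarray:
    """Write the colour-enhanced first region and the colour-restored second
    region back into the image to be processed, yielding the target image."""
    target = image.copy()
    target[first_mask] = first_corrected[first_mask]
    target[second_mask] = second_corrected[second_mask]
    return target
```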
In this embodiment, when at least one first color region in the target brightness region satisfies the color enhancement condition, color enhancement processing is performed on it to obtain a color-enhanced first color region; when the second color region in the target brightness region satisfies the color restoration condition, color restoration processing is performed on it to obtain a color-restored second color region. This effectively corrects the bluish cast of white light and restores the original color, so as to obtain the target image.
In one embodiment, the method further comprises: determining the number of first pixels corresponding to the first color in each pixel of at least one first color region of the target brightness region; in the case where the ratio between the number of first pixels and the number of target pixels of each pixel in the target luminance region is greater than a preset ratio, it is determined that at least one first color region satisfies the color enhancement condition.
In this embodiment, when the ratio between the first pixel count and the target pixel count of the pixels in the target brightness region is less than or equal to the preset ratio, it is determined that the first color region does not satisfy the color enhancement condition, and color enhancement processing is not performed on it.
In one embodiment, in a case where at least one first color region in the target brightness region satisfies a color enhancement condition, performing color enhancement processing on the at least one first color region to obtain a color enhanced first color region, including:
And when at least one first color area in the target brightness area meets the color enhancement condition, increasing the saturation of each pixel in the at least one first color area to obtain a first color area after color enhancement.
Specifically, the electronic device determines a first color region in the target luminance region, and determines whether the first color region satisfies a color enhancement condition. And for the first color region meeting the color enhancement condition, the electronic equipment increases the saturation of each pixel in the first color region to obtain the first color region after color enhancement. Further, for the first color region meeting the color enhancement condition, respectively increasing the saturation of each pixel in the first color region according to a preset proportionality coefficient to obtain a color enhanced first color region.
In this embodiment, the first color region is a red region, and when the red region satisfies the color enhancement condition, the saturation of each pixel in the red region is increased.
In this embodiment, the first color region is a blue region, and when the blue region satisfies the color enhancement condition, the saturation of each pixel in the blue region is increased. Further, in the case where the blue region satisfies the color enhancement condition, the saturation of each pixel in the blue region is increased, and the brightness of each pixel in the blue region is increased.
In this embodiment, saturation represents the vividness of the color. When at least one first color region in the target brightness region satisfies the color enhancement condition, the saturation of each pixel in that region is increased to achieve color enhancement, making its color more vivid and avoiding the color distortion or desaturation caused by an over-bright target brightness region, so that the color of the first color region is closer to the color visible to the human eye.
As shown in fig. 3, a first image, a second image, and a third image captured with different exposure times are provided; the exposure time of the first image is longer than that of the second image, and the exposure time of the second image is longer than that of the third image. As can be seen from fig. 3, images acquired with different exposure times differ in color saturation, which makes the images unclear. The color deviation in the first image is the greatest, making the text in the image illegible; the color deviation in the second image is smaller than that in the first image and greater than that in the third image. After the color enhancement processing of this embodiment, the color saturation can be enhanced so that the color of the resulting image is consistent with the color seen by the human eye.
In one embodiment, in a case where at least one first color region in the target brightness region satisfies a color enhancement condition, performing color enhancement processing on the at least one first color region to obtain a color enhanced first color region, including:
determining the number of first pixels corresponding to the first color in each pixel of at least one first color region of the target brightness region; determining that at least one first color region satisfies a color enhancement condition when a ratio between the number of first pixels and a target number of pixels of each pixel in the target luminance region is greater than a preset ratio;
and performing color enhancement processing on at least one first color region to obtain a first color region after color enhancement.
Specifically, the electronic device determines a first color region in the target luminance region, and determines a first number of pixels corresponding to the first color among the pixels of the first color region. The electronic device determines a target pixel number of each pixel in the target brightness region, calculates a ratio between the first pixel number and the target pixel number, and determines that the first color region satisfies a color enhancement condition if the ratio is greater than a preset ratio. And performing color enhancement processing on the first color region meeting the color enhancement condition to obtain a color enhanced first color region.
And if the ratio between the first pixel number and the target pixel number of each pixel in the target brightness area is smaller than or equal to the preset ratio, judging that the first color area does not meet the color enhancement condition, and not performing color enhancement processing on the first color area.
In this embodiment, the number of first pixels corresponding to the first color is determined among the pixels of at least one first color region of the target brightness region. When the ratio between the first pixel count and the target pixel count of the pixels in the target brightness region is greater than the preset ratio, it is determined that the at least one first color region satisfies the color enhancement condition. Whether the color of the first color region has faded can thus be accurately judged from the ratio of its pixel count to the total pixel count of the target brightness region, and color enhancement processing is applied to the faded first color region, making its color closer to the true color of the photographed scene.
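The pixel-count ratio test above can be sketched as follows; the function name and mask-based representation are assumptions:

```python
import numpy as np

def meets_enhancement_condition(region_mask: np.ndarray,
                                color_mask: np.ndarray,
                                preset_ratio: float) -> bool:
    """True when the ratio of first-colour pixels inside the target brightness
    region to all pixels of that region exceeds the preset ratio."""
    target_count = int(region_mask.sum())
    first_count = int((region_mask & color_mask).sum())
    return target_count > 0 and first_count / target_count > preset_ratio
```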
In one embodiment, when the second color region in the target brightness region satisfies the color restoration condition, performing color restoration processing on the second color region to obtain a color-restored second color region includes:
reducing the saturation of each pixel in the second color region when the second color region in the target brightness region satisfies the color restoration condition, to obtain a color-restored second color region.
Specifically, the electronic device determines a second color region in the target brightness region and determines whether it satisfies the color restoration condition. For a second color region satisfying the color restoration condition, the electronic device reduces the saturation of each pixel in the region to obtain a color-restored second color region. Further, the saturation of each pixel may be reduced according to a preset scaling factor.
In this embodiment, the second color region is a light blue region, and when the light blue region satisfies the color restoration condition, the saturation of each pixel in the light blue region is reduced.
As shown in fig. 4, at night, a certain area of the target brightness region in the generated image to be processed exhibits the bluish white-light phenomenon and therefore appears light blue, indicated by hatching in fig. 4. In this embodiment, when a light blue region is detected and it is determined that color restoration is required, the saturation of each pixel in the light blue region is reduced, thereby achieving color restoration and obtaining the target image.
In this embodiment, saturation represents the vividness of the color. When at least one second color region in the target brightness region satisfies the color restoration condition, the saturation of each pixel in that region is reduced to restore its color, avoiding the color deviation caused by an over-bright target brightness region, so that the color of the second color region is closer to the color visible to the human eye.
In one embodiment, the method further comprises:
Determining the shooting scene corresponding to the image to be processed according to at least one of the shooting parameters of the first image and the shooting parameters of the at least one frame of intermediate image; in the case where the shooting scene is the preset brightness scene, it is indicated that the second color region in the target brightness region satisfies the color restoration condition.
The shooting parameters refer to the parameters used when capturing the images, and may include the parameters used when capturing the first image and those used when capturing the intermediate image. The shooting parameters may include at least one of the exposure time controlled by the shutter, the sensitivity (ISO), and the illuminance index (luxIndex). Sensitivity refers to how responsive the film or image sensor is to light. The illuminance index reflects the degree to which the scene is illuminated.
Specifically, the electronic device acquires a photographing parameter when photographing the first image, or acquires a photographing parameter when photographing at least one frame of the intermediate image, or acquires a photographing parameter when photographing the first image, and a photographing parameter when photographing at least one frame of the intermediate image. The acquired photographing parameters include at least one of exposure time, sensitivity, and illuminance.
The shooting scene corresponding to the image to be processed refers to the scene in which the first image and the at least one frame of intermediate image were captured. The shooting scene may be, for example, but is not limited to, a brighter or darker environment, such as an outdoor daytime scene, a night scene, or a dim indoor scene. The preset brightness scene may be a scene with dim light; specifically, it may be a scene in which the light level is below a threshold value.
Whether the shooting scene corresponding to the image to be processed is the preset brightness scene is judged according to at least one of the exposure time, the sensitivity, and the illuminance index. If the shooting scene is not the preset brightness scene, the second color region in the target brightness region does not satisfy the color restoration condition; if it is, the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, a longer exposure time, a higher sensitivity, and a larger illuminance index all indicate a darker environment: the exposure time in a dark environment is longer than in a bright environment, the sensitivity in a dark environment is higher than in a bright environment, and the illuminance index in a dark environment is larger than in a bright environment.
When the exposure time in the shooting parameters is longer than the preset exposure time, the sensitivity is greater than the preset sensitivity, and the illuminance index is greater than the preset illuminance, the shooting scene is judged to be the preset brightness scene.
In this embodiment, the shooting scene corresponding to the image to be processed is determined according to at least one of the shooting parameters of the first image and the shooting parameters of the at least one frame of intermediate image, so the shooting scene can be accurately determined from the shooting parameters. When the shooting scene is the preset brightness scene, the second color region in the target brightness region satisfies the color restoration condition. When an image is shot in a dimly lit scene, fill light is often used, which gives the white-light areas of the image a bluish cast; performing color restoration on images shot in dim scenes can therefore effectively correct the bluish white-light areas and accurately realize white balance correction of the image.
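The three-way threshold test described above can be sketched as a simple predicate; the function name, units, and all threshold values in the usage example are illustrative assumptions, not values from the patent:

```python
def is_preset_brightness_scene(exposure_us: float, iso: float, lux_index: float,
                               exposure_thresh: float, iso_thresh: float,
                               lux_thresh: float) -> bool:
    """Per the rule above, the scene counts as the preset brightness scene only
    when exposure time, sensitivity and the illuminance index all exceed their
    preset thresholds (all three indicating a dark environment)."""
    return (exposure_us > exposure_thresh
            and iso > iso_thresh
            and lux_index > lux_thresh)
```

For example, with assumed presets of 20 ms exposure, ISO 400 and luxIndex 300, a frame captured at 33 ms, ISO 800, luxIndex 400 would be classified as the preset brightness scene and its second color region would be marked for color restoration.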
In one embodiment, the method further comprises:
Scene recognition is performed on the second color region in the target brightness region; when the second color region does not belong to a preset shooting scene presented in the second color and does belong to the preset brightness scene, the second color region satisfies the color restoration condition.
The preset shooting scene refers to a scene presented in the second color; for example, a sky scene or a seaside scene.
Specifically, the electronic device performs scene recognition on the second color region in the target brightness region to obtain the shooting scene corresponding to the second color region. If that shooting scene belongs to a preset shooting scene presented in the second color, the second color region in the target brightness region does not satisfy the color restoration condition, and no color restoration is performed.
If the second color region does not belong to a preset shooting scene presented in the second color and does belong to the preset brightness scene, the second color region in the target brightness region satisfies the color restoration condition. For example, if the second color region is not a sky scene but is a night scene, the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, scene recognition is performed on the second color region in the target brightness region, and the region is judged to satisfy the color restoration condition only when it does not belong to a preset shooting scene presented in the second color and does belong to the preset brightness scene. Using the preset shooting scene and the preset brightness scene together as judgment conditions makes the judgment of the second color region more accurate.
In one embodiment, the method further comprises:
Determining a third target pixel with a pixel weight greater than a weight threshold value in each pixel of at least one frame of intermediate image; in the case where there are a preset number of pixels in the second color region corresponding to each third target pixel, it is indicated that the second color region in the target luminance region satisfies the color reproduction condition.
Specifically, the electronic device may acquire the fusion weight map corresponding to the at least one frame of intermediate image, determine the target pixel weights greater than the weight threshold among the pixel weights in the fusion weight map, and determine the third target pixels corresponding to those target pixel weights in the intermediate image. The pixels in the second color region that correspond to third target pixels are then found and counted. When the count reaches the preset number, at least the preset number of pixels in the second color region come from the intermediate image, and the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, the third target pixels, whose pixel weights are greater than the weight threshold, are determined among the pixels of the at least one frame of intermediate image; when a preset number of pixels in the second color region correspond to third target pixels, at least that many pixels in the second color region come from the intermediate image. The exposure duration of the intermediate image is shorter than that of the first image, so the intermediate image is a dark frame, and its fusion weight map reflects the distribution of pixels with larger brightness values. Such pixels are prone to bluish white light, which makes the color of the second color region deviate from the real scene and calls for color restoration; this check therefore accurately judges whether the second color region satisfies the color restoration condition.
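Assuming boolean region masks and fusion weights normalized to [0, 1] (both assumptions for illustration), the counting check described above might be sketched as:

```python
import numpy as np

def mainly_from_dark_frames(second_color_mask, fusion_weight_map,
                            weight_th=0.5, preset_count=4):
    """True when at least `preset_count` pixels of the second color
    region coincide with 'third target pixels', i.e. intermediate-frame
    pixels whose fusion weight exceeds `weight_th`."""
    third_target = fusion_weight_map > weight_th
    overlap = np.logical_and(second_color_mask, third_target)
    return int(overlap.sum()) >= preset_count

region = np.zeros((4, 4), dtype=bool)
region[:2, :2] = True                            # a 4-pixel color region
print(mainly_from_dark_frames(region, np.full((4, 4), 0.8)))  # True
print(mainly_from_dark_frames(region, np.full((4, 4), 0.1)))  # False
```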
In one embodiment, an image processing method is provided, applied to an electronic device, including:
Acquiring a first image, a second image and a third image, and synthesizing an image to be processed based on the first image, the second image and the third image; the exposure time of the first image is longer than the exposure time of the second image, and the exposure time of the second image is longer than the exposure time of the third image.
Then, the first image is converted into a first gray scale image, and the second image is converted into a second gray scale image.
Further, determining a brightness difference between each pixel in the first gray scale image and a corresponding pixel in the second gray scale image; screening out a first target pixel from each pixel of the first gray level image and each pixel of the second gray level image according to each brightness difference value; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each first target pixel.
Optionally, determining a second target pixel with a pixel luminance value greater than a luminance threshold from the pixels of the second gray scale image; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each second target pixel.
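The two seeding strategies above (first target pixels from the long/mid brightness difference, second target pixels from a mid-exposure brightness threshold) can be sketched with NumPy on 8-bit gray images; the threshold values and the union of the two masks are illustrative choices:

```python
import numpy as np

def initial_brightness_region(gray_long, gray_mid,
                              diff_th=60, bright_th=200):
    """First target pixels: long/mid gray difference above diff_th.
    Second target pixels: mid-exposure pixels brighter than bright_th.
    Either mask (here, their union) seeds the initial brightness region."""
    diff = gray_long.astype(np.int16) - gray_mid.astype(np.int16)
    first_target = diff > diff_th
    second_target = gray_mid > bright_th
    return np.logical_or(first_target, second_target)

gray_long = np.array([[255, 10]], dtype=np.uint8)
gray_mid = np.array([[120, 5]], dtype=np.uint8)
print(initial_brightness_region(gray_long, gray_mid))  # [[ True False]]
```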
Then, a first fusion weight map corresponding to the second image and a second fusion weight map corresponding to the third image are obtained; the first fused weight map includes first pixel weights for each pixel in the second image and the second fused weight map includes second pixel weights for each pixel in the third image.
Further, according to each first pixel weight in the first fusion weight map, determining the third target pixels, whose pixel weights are greater than the weight threshold, among the pixels of the second image; and according to each second pixel weight in the second fusion weight map, determining the third target pixels, whose pixel weights are greater than the weight threshold, among the pixels of the third image.
Further, an initial brightness region in the image to be processed is adjusted according to each third target pixel, and morphological processing is carried out on the adjusted brightness region to obtain a target brightness region in the image to be processed.
Then, converting the target luminance region from the first color space to the second color space in a case where at least one first color region of the target luminance region satisfies the color enhancement condition; the second color space is different from the first color space; and performing color enhancement processing on at least one first color region of the target brightness region in the second color space, and converting the target brightness region obtained by color enhancement from the second color space to the first color space to obtain a target image.
Optionally, converting the target brightness region from the first color space to the second color space in the case where the second color region of the target brightness region satisfies the color restoration condition; performing color restoration processing on the second color region of the target brightness region in the second color space; and converting the restored target brightness region from the second color space back to the first color space to obtain the target image.
In one embodiment, as shown in fig. 5, an image sensor (CMOS sensor) captures RAW data of multiple frames with different exposures according to a preset exposure strategy, namely a first image EV0, a second image EV-, and a third image EV--. Each frame is converted into an RGB image after black level correction, lens shading correction, white balance correction, and demosaicing in an image signal processor (ISP). HDR synthesis is performed on the RGB images to obtain the image to be processed, namely a high dynamic range image. The target highlight region is determined during synthesis, transferred to the HSV color space to perform at least one of color enhancement and color restoration, and then converted back to the RGB color space to output the final synthesis result, namely the target image. The target highlight region is the target brightness region.
In this embodiment, the process flow of performing color enhancement or color restoration on the target highlight region is shown in fig. 6.
Step 602, acquiring RGB images with different exposure durations, including a first image EV0, a second image EV-, and a third image EV--, where EV0 is the long-exposure frame, EV- is the medium-exposure frame, and EV-- is the short-exposure (darkest) frame.
Step 604, performing HDR synthesis processing on the multiple RGB frames, including aligning the frames, calculating fusion weights with an exposure fusion algorithm, and performing weighted image fusion based on the fusion weight maps corresponding to the frames, so as to obtain a synthesized high dynamic range (HDR) image. The HDR image is an RGB image and is the image to be processed. At the same time, the first fusion weight map corresponding to the second image EV- and the second fusion weight map corresponding to the third image EV-- are obtained.
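The weighted fusion in step 604 can be sketched as a per-pixel weighted sum of the aligned frames; the sketch assumes the weight maps have already been normalized to sum to 1 at every pixel (computing them, the core of the exposure fusion algorithm, is omitted):

```python
import numpy as np

def fuse_exposures(frames, weight_maps):
    """Per-pixel weighted fusion of aligned H x W x 3 RGB frames using
    one H x W weight map per frame (maps assumed to sum to 1)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, w in zip(frames, weight_maps):
        acc += frame.astype(np.float64) * w[..., None]
    return acc

bright = np.full((1, 1, 3), 240.0)   # stand-in for EV0
dark = np.full((1, 1, 3), 40.0)      # stand-in for a dark frame
hdr = fuse_exposures([bright, dark],
                     [np.full((1, 1), 0.25), np.full((1, 1), 0.75)])
print(hdr[0, 0, 0])  # 90.0  (0.25*240 + 0.75*40)
```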
Step 606, converting the first image EV0 and the second image EV- into 8-bit gray scale images, respectively, to obtain a first gray scale image corresponding to EV0 and a second gray scale image corresponding to EV-. The highlight region detected from the long-exposure and medium-exposure frames covers the highlight region of the short-exposure frame, so the detection does not need to involve the short-exposure frame.
In step 608, luminance segmentation is performed on the first gray scale image and the second gray scale image to detect the initial highlight region in the HDR image. Specifically, a suitable difference threshold luma_diff_th, the judgment threshold for the luminance (gray) difference between the long-exposure and medium-exposure frames, and a luminance threshold mid_highlight_th, the screening threshold for highlight pixels of the medium-exposure frame, are set. The region formed by pixels whose luminance difference between the long-exposure and medium-exposure frames is greater than luma_diff_th is detected as the initial highlight region map; alternatively, the region formed by medium-exposure pixels whose luminance exceeds mid_highlight_th is detected as the initial highlight region map.
Step 610, combining the first fusion weight map and the second fusion weight map determined in the HDR synthesis of step 604, adding the pixels of the HDR image whose pixel weights exceed the weight threshold to the initial highlight region, to obtain the adjusted highlight region map. The dark frames usually contribute the highlight-region information, so the fusion weight maps of EV- and EV-- reflect, to some extent, the distribution of highlight regions in the scene.
In step 612, morphological processing is performed on the adjusted highlight region map, mainly including filtering, dilation, erosion, and connected-component processing, so that the adjusted highlight region map is expanded to a certain extent, giving the final highlight map, namely the target highlight region in the HDR image.
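A dependency-free sketch of this morphological step on a boolean highlight map, implementing 3x3 dilation and erosion and combining them into a closing; the 3x3 kernel and the choice of a closing are illustrative assumptions:

```python
import numpy as np

def dilate3(mask):
    """3x3 binary dilation of a boolean mask via shifted ORs."""
    p = np.pad(mask, 1)          # zero (False) border
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode3(mask):
    """3x3 binary erosion by duality: the complement of dilating the
    complement (pixels outside the image count as foreground)."""
    return ~dilate3(~mask)

def refine_highlight_map(mask):
    """Close the adjusted highlight map: dilate, then erode, which
    fills pinholes and slightly expands the map as step 612 describes."""
    return erode3(dilate3(mask))
```

For example, `dilate3` grows a single True pixel into a 3x3 block, and `erode3` shrinks a 3x3 block back to its center pixel.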
Step 614, the target highlight region in the HDR image is transferred from the RGB color space to the HSV color space, and the total number of pixels highlight_count in the target highlight region is counted. The color space conversion is performed only on the target highlight region, not on the complete HDR image, in order to reduce the conversion computation.
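The region-only conversion can be illustrated with the standard library's colorsys; the per-pixel loop is for clarity rather than speed, and the function name is hypothetical:

```python
import colorsys
import numpy as np

def region_to_hsv(rgb_image, highlight_mask):
    """Convert only the pixels selected by `highlight_mask` from RGB
    (0-255) to HSV (0-1 floats); also return the pixel count, i.e.
    the highlight_count used by the later ratio checks."""
    ys, xs = np.nonzero(highlight_mask)
    hsv = np.empty((len(ys), 3))
    for i in range(len(ys)):
        r, g, b = rgb_image[ys[i], xs[i]] / 255.0
        hsv[i] = colorsys.rgb_to_hsv(r, g, b)
    return hsv, len(ys)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)                  # one pure-red highlight pixel
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True
hsv, count = region_to_hsv(img, mask)
print(count, hsv[0])  # 1 [0. 1. 1.]
```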
In step 616, the red (red_map) and blue (blue_map) regions whose saturation is greater than a preset threshold are separated according to the H (hue) and S (saturation) components of the target highlight region, and the numbers red_count and blue_count of the screened red and blue pixels are counted.
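Assuming hue in degrees and saturation in [0, 1], with illustrative hue ranges and saturation threshold (the patent does not specify them), the screening of step 616 might be sketched as:

```python
import numpy as np

def split_red_blue(h_deg, s, sat_th=0.4):
    """Separate saturated red and blue pixels from the H and S planes
    of the target highlight region; return the masks and counts."""
    saturated = s > sat_th
    red_map = saturated & ((h_deg < 20) | (h_deg > 340))
    blue_map = saturated & (h_deg > 200) & (h_deg < 260)
    return red_map, blue_map, int(red_map.sum()), int(blue_map.sum())

h = np.array([[350.0, 230.0, 230.0]])
s = np.array([[0.9, 0.8, 0.1]])   # last pixel too desaturated to count
red_map, blue_map, red_count, blue_count = split_red_blue(h, s)
print(red_count, blue_count)  # 1 1
```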
In step 618, it is determined whether color enhancement is performed.
In step 620, if the ratio red_count/highlight_count is greater than a preset ratio, red is enhanced; if it is smaller, red is not enhanced. Likewise, if blue_count/highlight_count is greater than a preset ratio, blue is enhanced; if it is smaller, blue is not enhanced.
The specific color enhancement processing is as follows. Image morphology processing (dilation and erosion) is performed on the red region map to obtain a new red_map, and the S component of the pixels in the resulting red region is scaled up by a coefficient to enhance saturation, giving the color-enhanced red region. Image morphology processing is likewise performed on the blue region map to obtain a new blue_map, and the H and S components of the pixels in the resulting blue region are scaled by preset scaling factors, giving the color-enhanced blue region. The preset scaling factors are determined through scene tests so that the hue and saturation look more realistic.
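The saturation boost can be sketched as scaling the S plane inside the morphologically cleaned region, clipping to the valid range; the gain of 1.3 is a placeholder for the scene-tested coefficient mentioned above:

```python
import numpy as np

def enhance_region_saturation(s, region_mask, gain=1.3):
    """Scale the S component (0-1) of the pixels in `region_mask` by
    `gain`, clipping so saturation stays within [0, 1]."""
    out = s.copy()
    out[region_mask] = np.clip(out[region_mask] * gain, 0.0, 1.0)
    return out

s = np.array([[0.5, 0.9]])
mask = np.array([[True, True]])
enhanced = enhance_region_saturation(s, mask)  # -> 0.65 and 1.0 (clipped)
```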
In step 622, the light blue region (slight_blue_map), whose saturation is less than a preset threshold, is detected based on the H (hue) and S (saturation) components, and the number of bluish pixels is counted.
In step 624, it is determined whether to push the pixels in the light blue region toward white light, i.e., whether to perform color restoration, also called white light restoration, on the light blue region. The main purpose is to correct the white-light bluing caused by the inaccurate white balance of the dark frame images, while avoiding misjudgment; for example, a light blue sky region does not need color restoration and must not be restored by mistake. The bases for judging whether to restore the light blue region are: (1) judging according to the shutter-controlled exposure duration, the sensitivity ISO, and the illuminance luxIndex in the shooting metadata; if the scene is judged to be outdoors in daytime, no color restoration is performed; (2) in some shooting modes, scene detection can be performed; if a sky scene is directly detected and its mask map overlaps the light blue region map to a certain extent, no color restoration is performed; if a sky scene cannot be directly detected, whether the environment is dark is judged; in a bright environment, if the ratio of the number of light blue pixels to the total number of pixels in the target highlight region exceeds a certain threshold, the region may be sky and no color restoration is performed, while for dark environments such as night scenes, color restoration is performed; (3) checking, via the first and second fusion weight maps, whether the light blue pixels mainly come from the second and third images: for the pixels in the light blue region, the pixel weights of the corresponding dark-frame pixels are determined, and if enough light blue pixels have corresponding pixel weights greater than a certain weight threshold, the light blue pixels mainly come from the dark frames, and white light restoration is performed.
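Two of the checks above, the sky-mask overlap test of basis (2) and the dark-frame-origin test of basis (3), can be sketched as boolean helpers; all thresholds and the 0-1 fusion weight range are assumptions:

```python
import numpy as np

def sky_overlaps_blue(sky_mask, slight_blue_mask, overlap_th=0.3):
    """True when the detected sky mask covers more than `overlap_th`
    of the light blue region, in which case restoration is skipped."""
    blue_total = int(slight_blue_mask.sum())
    if blue_total == 0:
        return False
    inter = np.logical_and(sky_mask, slight_blue_mask).sum()
    return inter / blue_total > overlap_th

def blue_from_dark_frames(slight_blue_mask, weight_maps,
                          weight_th=0.5, ratio_th=0.6):
    """True when most light blue pixels have a dark-frame fusion
    weight above `weight_th`, i.e. they mainly come from EV-/EV--."""
    dark = np.zeros_like(slight_blue_mask)
    for w in weight_maps:
        dark |= w > weight_th
    hits = np.logical_and(slight_blue_mask, dark).sum()
    return hits / max(int(slight_blue_mask.sum()), 1) > ratio_th
```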
In step 626, after step 624 determines that color restoration is required, the saturation of the pixels in the light blue region map is reduced by an appropriate factor, i.e., the S component is reduced, to obtain the color-restored light blue region.
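The restoration itself is then a single attenuation of the S component inside the light blue map; the factor 0.4 is an illustrative placeholder for the "appropriate factor":

```python
import numpy as np

def restore_white_light(s, slight_blue_mask, factor=0.4):
    """Push bluish pixels toward white by shrinking their saturation
    (step 626); pixels outside the mask are left untouched."""
    out = s.copy()
    out[slight_blue_mask] = out[slight_blue_mask] * factor
    return out

s = np.array([[0.25, 0.25]])
mask = np.array([[True, False]])
restored = restore_white_light(s, mask)  # masked pixel 0.1, other 0.25
```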
Step 628, converting the target highlight region after completing the color enhancement and color reproduction from the HSV color space back to the RGB color space, resulting in a target image.
In this embodiment, a method for enhancing highlight colors and restoring white light in an HDR image is provided. It mainly includes accurately detecting the highlight region, computing the chroma information of the multi-frame synthesis result within that region, judging whether to perform color enhancement and white light restoration, detecting the red, blue, and light blue regions awaiting adjustment, and enhancing the color of qualifying pixels in the red and blue regions, so as to solve the problem that the colors of HDR-synthesized images appear washed out in highlight regions such as billboards.
The light blue region formed by pixels from the dark frames is pushed toward white light, realizing white light restoration and correcting the bluish cast that inaccurate white balance introduces into image regions formed by pixels from the darkest frame. This improves the color expressiveness and accuracy of the final synthesized image: its colors are accurate, its saturation is improved, and the image expressiveness of HDR photographing is enhanced.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image processing device for realizing the above-mentioned image processing method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the image processing apparatus provided below may refer to the limitation of the image processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 7, there is provided an image processing apparatus including:
The acquiring module 702 is configured to acquire an image to be processed, where the image to be processed is generated based on a first image and at least one frame of intermediate image, and an exposure time period of the first image is longer than an exposure time period of the intermediate image.
And the brightness segmentation module 704 is configured to perform brightness segmentation processing on at least one of the first image and at least one frame of intermediate image, so as to obtain a target brightness region in the image to be processed.
And the color correction module 706 is configured to, when the target color region in the target brightness region meets the color correction condition, perform color correction processing on the target color region to obtain the target image.
In this embodiment, the image to be processed is acquired; it is generated based on a first image and at least one frame of intermediate image, where the exposure duration of the first image is longer than that of the intermediate image. Brightness segmentation processing is performed on at least one of the first image and the at least one frame of intermediate image to obtain the target brightness region in the image to be processed, so an image region likely to have color deviation can be determined based on brightness. When a target color region in the target brightness region meets the color correction condition, i.e., when color deviation exists in the target color region, color correction processing is performed on that region. Color correction can thus be applied accurately to the deviated region of the image, so that the color of the resulting target image better matches the actual scene seen by human eyes.
In one embodiment, the at least one frame of intermediate image includes a second image and a third image, the exposure duration of the second image being longer than that of the third image; the brightness segmentation module 704 is further configured to perform brightness segmentation processing on at least one of the first image and the second image to obtain an initial brightness region in the image to be processed, and to adjust the initial brightness region according to the second image and its corresponding first fusion weight map, and the third image and its corresponding second fusion weight map, to obtain the target brightness region; the first fusion weight map includes first pixel weights for each pixel in the second image, and the second fusion weight map includes second pixel weights for each pixel in the third image.
In this embodiment, at least one of the first image and the second image is subjected to brightness segmentation processing to obtain an initial brightness region in the image to be processed, so as to primarily determine a brightness region meeting a brightness condition in the image to be processed. The exposure time length of the second image is longer than that of the third image, the second image and the third image both belong to dark frame images, the first fusion weight map comprises first pixel weights of all pixels in the second image, the second fusion weight map comprises second pixel weights of all pixels in the third image, and then the first fusion weight map and the second fusion weight map can reflect pixel distribution of a brighter area in the images. And adjusting the initial brightness region according to the second image and the corresponding first fusion weight map and the third image and the corresponding second fusion weight map, so that the target brightness region in the image to be processed can be more accurately determined.
In one embodiment, the luminance segmentation module 704 is further configured to determine a luminance difference between each pixel in the first image and a corresponding pixel in the second image; screening out a first target pixel from each pixel of the first image and each pixel of the second image according to each brightness difference value; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each first target pixel.
In this embodiment, the brightness difference between each pixel in the first image and the corresponding pixel in the second image is determined, and the first target pixel is screened out from each pixel in the first image and each pixel in the second image according to each brightness difference, so as to accurately determine the initial brightness area formed by the corresponding pixel in the image to be processed according to each first target pixel.
In one embodiment, the luminance segmentation module 704 is further configured to determine, from each pixel of the second image, a second target pixel having a luminance value greater than a luminance threshold; and determining an initial brightness area formed by corresponding pixels in the image to be processed according to each second target pixel.
In this embodiment, the exposure duration of the first image is longer than that of the second image, and the second image can present more information of the highlight region than the first image. Therefore, the second target pixels, whose brightness values are greater than the brightness threshold, are determined from the pixels of the second image, and the initial brightness region formed by the corresponding pixels in the image to be processed can be accurately determined from these second target pixels.
In one embodiment, the luminance segmentation module 704 is further configured to determine, according to each first pixel weight in the first fused weight map, a third target pixel in each pixel of the second image, where the pixel weight is greater than the weight threshold; determining a third target pixel with a pixel weight greater than a weight threshold value in each pixel of the third image according to each second pixel weight in the second fusion weight map; and adjusting the initial brightness area in the image to be processed according to each third target pixel to obtain the target brightness area in the image to be processed.
In this embodiment, according to each first pixel weight in the first fusion weight map, the third target pixels whose pixel weights are greater than the weight threshold are determined among the pixels of the second image, screening out the pixels of the second image that meet the brightness condition. According to each second pixel weight in the second fusion weight map, the third target pixels whose pixel weights are greater than the weight threshold are determined among the pixels of the third image, screening out the pixels of the third image that meet the brightness condition. The initial brightness region in the image to be processed is then adjusted, i.e., enlarged, according to these third target pixels. Because the pixel weights allow the pixels meeting the brightness condition to be screened more accurately, the resulting target brightness region is more accurate, which in turn makes the subsequent color correction more accurate.
In one embodiment, the luminance segmentation module 704 is further configured to convert the first image into a first gray scale image and convert the second image into a second gray scale image; and performing brightness segmentation processing on at least one of the first gray level image and the second gray level image to obtain an initial brightness region in the image to be processed.
In this embodiment, the first image is converted into the first gray-scale image, the second image is converted into the second gray-scale image, and the brightness is divided after the conversion into the gray-scale images, so that the calculation amount can be effectively reduced, and the processing efficiency can be improved. And performing brightness segmentation processing on at least one of the first gray level image and the second gray level image to preliminarily determine a brightness region meeting brightness conditions in the image to be processed through brightness segmentation.
In one embodiment, the color correction condition includes at least one of a color enhancement condition and a color restoration condition, and the color correction processing includes at least one of color enhancement processing and color restoration processing; the target color region includes at least one of at least one first color region for color enhancement and a second color region for color restoration.
In this embodiment, the color correction condition includes at least one of a color enhancement condition and a color restoration condition, the color correction processing includes at least one of color enhancement processing and color restoration processing, and the target color region includes at least one of a first color region for color enhancement and a second color region for color restoration. When any one of the conditions is satisfied, the color region satisfying it receives the corresponding color correction processing; providing multiple color correction conditions and processes effectively meets the color correction needs of different scenes.
In one embodiment, the color correction module 706 is further configured to convert the target luminance region from a first color space to a second color space; the second color space is different from the first color space; performing color correction processing on the target luminance area in the second color space in the case where the target luminance area in the second color space satisfies the color correction condition; and converting the target brightness area after the color correction processing from the second color space to the first color space to obtain a target image.
In this embodiment, the second color space is different from the first color space, and only the target brightness region of the image to be processed, rather than the whole image, is converted from the first color space to the second color space, which reduces the conversion computation and improves processing efficiency. When the target brightness region in the second color space satisfies the color correction condition, color correction processing is performed on it in the second color space, and the corrected target brightness region is then converted from the second color space back to the first color space to obtain the target image. The corrected region thus eliminates the color deviation, so the color of the target image is closer to that of the actual scene seen by human eyes, and the image colors are more real and natural.
In one embodiment, the color correction module 706 is further configured to perform color enhancement processing on at least one first color region in the target brightness region to obtain a color enhanced first color region if the at least one first color region meets a color enhancement condition; and under the condition that the second color area in the target brightness area meets the color restoration condition, performing color restoration processing on the second color area to obtain a color restored second color area so as to obtain the target image.
In this embodiment, when at least one first color region in the target brightness region meets the color enhancement condition, color enhancement processing is performed on it to obtain the color-enhanced first color region. When the second color region in the target brightness region meets the color restoration condition, color restoration processing is performed on it, which effectively resolves the bluish white light and restores the original color, giving the color-restored second color region and thereby the target image.
In one embodiment, the color correction module 706 is further configured to, in a case where at least one first color region in the target brightness region meets the color enhancement condition, increase saturation of each pixel in the at least one first color region, and obtain a first color region after color enhancement.
In this embodiment, saturation represents the vividness of a color. When at least one first color region in the target brightness region satisfies the color enhancement condition, the saturation of each pixel in the at least one first color region is increased to achieve color enhancement of the first color region, making its color more vivid and avoiding color distortion or washed-out color caused by an overly bright target brightness region, so that the color of the first color region is closer to what the human eye would see.
In one embodiment, the color correction module 706 is further configured to determine the number of first pixels corresponding to the first color among the pixels of at least one first color region of the target brightness region; determine that the at least one first color region satisfies the color enhancement condition when the ratio between the number of first pixels and the target number of pixels in the target brightness region is greater than a preset ratio; and perform color enhancement processing on the at least one first color region to obtain a color-enhanced first color region. In this embodiment, whether the first color region suffers from color weakening can be judged accurately based on the ratio of the number of first pixels to the total number of pixels in the target brightness region, and color enhancement processing is then performed on the weakened first color region, so that its color is closer to the real color of the photographed scene.
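The ratio test above can be sketched as follows. The hue band used to identify "first color" pixels, the preset ratio, and the function name are illustrative assumptions; the patent does not specify concrete values.

```python
import numpy as np

def needs_enhancement(region_hues, hue_lo, hue_hi, total_pixels, preset_ratio=0.1):
    """Count pixels whose hue falls in the assumed first-color band and
    compare their share of the target brightness region against a preset
    ratio, mirroring the color enhancement condition."""
    first_count = int(np.sum((region_hues >= hue_lo) & (region_hues <= hue_hi)))
    return first_count / total_pixels > preset_ratio
```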
In one embodiment, the color correction module 706 is further configured to reduce the saturation of each pixel in the second color region to obtain a color-restored second color region when the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, saturation represents the vividness of a color. When at least one second color region in the target brightness region satisfies the color restoration condition, the saturation of each pixel in the at least one second color region is reduced to achieve color restoration of the second color region, avoiding the color deviation caused by an overly bright target brightness region, so that the color of the second color region is closer to what the human eye would see.
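A minimal sketch of the saturation-reduction step, assuming an 8-bit HSV pixel layout and a fixed reduction factor (both are illustrative assumptions, not values from the patent):

```python
import numpy as np

def restore_color(hsv_region, reduction=0.3):
    """Reduce the S channel of an HSV region to pull a bluish cast back
    toward neutral; H and V are left untouched."""
    out = hsv_region.astype(float)
    out[..., 1] = np.clip(out[..., 1] * (1.0 - reduction), 0, 255)
    return out.astype(np.uint8)
```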
In one embodiment, the apparatus further comprises a judging module; the judging module is configured to determine the shooting scene corresponding to the image to be processed according to at least one of the shooting parameters of the first image and the shooting parameters of the at least one frame of intermediate image; when the shooting scene is a preset brightness scene, this indicates that the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, the shooting scene corresponding to the image to be processed is determined according to at least one of the shooting parameters of the first image and the shooting parameters of the at least one frame of intermediate image, so that the shooting scene can be determined accurately from the shooting parameters. When the shooting scene is a preset brightness scene, this indicates that the second color region in the target brightness region satisfies the color restoration condition. When an image is captured in a dark scene, fill light is often used, which tends to make the white-light regions of the image bluish; performing color restoration on images captured in dark scenes therefore effectively suppresses the bluish cast in white-light regions and achieves accurate white balance correction.
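A scene judgment from shooting parameters might look like the following sketch. The choice of ISO and exposure time as the parameters, and the threshold values, are illustrative assumptions; the patent leaves the concrete criteria open.

```python
def is_preset_brightness_scene(iso, exposure_time_s, iso_thresh=800, exp_thresh=1 / 30):
    """Treat high ISO together with long exposure as a dark-light scene,
    i.e. a scene in which fill light (and hence a bluish white-light cast)
    is likely. Thresholds are assumptions, not values from the patent."""
    return iso >= iso_thresh and exposure_time_s >= exp_thresh
```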
In one embodiment, the judging module is further configured to perform scene recognition on the second color region in the target brightness region; when the second color region does not belong to the preset shooting scene represented by the second color but belongs to the preset brightness scene, this indicates that the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, scene recognition is performed on the second color region in the target brightness region, and when the second color region does not belong to the preset shooting scene represented by the second color but belongs to the preset brightness scene, it is indicated that the second color region satisfies the color restoration condition. Using the preset shooting scene and the preset brightness scene together as judgment conditions makes the judgment on the second color region more accurate.
In one embodiment, the judging module is further configured to determine a third target pixel whose pixel weight is greater than the weight threshold among the pixels of the at least one frame of intermediate image; when a preset number of pixels in the second color region correspond to third target pixels, it is indicated that the second color region in the target brightness region satisfies the color restoration condition.
In this embodiment, a third target pixel whose pixel weight is greater than the weight threshold is determined among the pixels of the at least one frame of intermediate image; when a preset number of pixels in the second color region correspond to third target pixels, it is indicated that at least the preset number of pixels in the second color region come from the intermediate image. The exposure time of the intermediate image is shorter than that of the first image, so the intermediate image is a dark frame, and its fusion weight map reflects the distribution of pixels with larger brightness values. Such pixels are prone to a bluish white-light cast, which makes the color of the second color region deviate from the real scene, so the second color region needs color restoration. In this way, whether the second color region satisfies the color restoration condition can be judged accurately.
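The weight-threshold test can be sketched with boolean masks as follows; the function name, threshold, and preset count are assumptions introduced for illustration.

```python
import numpy as np

def satisfies_restoration(weight_map, second_color_mask, weight_thresh, preset_count):
    """Mark pixels of the intermediate image whose fusion weight exceeds
    the threshold (third target pixels), then check how many of them fall
    inside the second color region."""
    third_target = weight_map > weight_thresh
    overlap = int(np.sum(third_target & second_color_mask))
    return overlap >= preset_count
```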
The respective modules in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 8. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the electronic device is used to exchange information between the processor and external devices. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image processing method. The display unit of the electronic device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the electronic device may be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the electronic device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the electronic device to which the solution is applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
An embodiment of the present application further provides one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail, but are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of the application should be determined by the appended claims.
Claims (19)
1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
performing brightness segmentation processing on at least one of the first image and the at least one frame of intermediate image to obtain a target brightness region in the image to be processed; and
performing, when a target color region in the target brightness region satisfies a color correction condition, color correction processing on the target color region to obtain a target image.
2. The method of claim 1, wherein the at least one frame of intermediate images includes a second image and a third image, the second image having a longer exposure time than the third image; the performing brightness segmentation processing on at least one of the first image and the at least one frame of the intermediate image to obtain a target brightness region in the image to be processed includes:
Performing brightness segmentation processing on at least one of the first image and the second image to obtain an initial brightness region in the image to be processed;
adjusting the initial brightness region according to the second image and a corresponding first fusion weight map, and the third image and a corresponding second fusion weight map, to obtain the target brightness region, wherein the first fusion weight map includes a first pixel weight for each pixel in the second image, and the second fusion weight map includes a second pixel weight for each pixel in the third image.
3. The method according to claim 2, wherein the performing the luminance segmentation process on at least one of the first image and the second image to obtain the initial luminance region in the image to be processed includes:
determining a luminance difference between each pixel in the first image and a corresponding pixel in the second image;
screening out a first target pixel from the pixels of the first image and the pixels of the second image according to each brightness difference value; and
determining an initial brightness region formed by corresponding pixels in the image to be processed according to each first target pixel.
4. The method according to claim 2, wherein the performing the luminance segmentation process on at least one of the first image and the second image to obtain the initial luminance region in the image to be processed includes:
determining a second target pixel with a brightness value greater than a brightness threshold value from pixels of the second image;
and determining an initial brightness region formed by corresponding pixels in the image to be processed according to each second target pixel.
5. The method of claim 2, wherein adjusting the initial luminance region to obtain a target luminance region according to the second image and the corresponding first fusion weight map, and the third image and the corresponding second fusion weight map comprises:
determining a third target pixel with a pixel weight greater than a weight threshold value in each pixel of the second image according to each first pixel weight in the first fusion weight map;
determining, according to each second pixel weight in the second fusion weight map, a third target pixel whose pixel weight is greater than the weight threshold among the pixels of the third image; and
adjusting the initial brightness region in the image to be processed according to each third target pixel to obtain the target brightness region in the image to be processed.
6. The method according to claim 2, wherein the performing the luminance segmentation process on at least one of the first image and the second image to obtain the initial luminance region in the image to be processed includes:
converting the first image into a first gray-scale image and converting the second image into a second gray-scale image; and
performing brightness segmentation processing on at least one of the first gray-scale image and the second gray-scale image to obtain the initial brightness region in the image to be processed.
7. The method of claim 1, wherein the color correction condition includes at least one of a color enhancement condition and a color restoration condition, and the color correction processing includes at least one of color enhancement processing and color restoration processing; and the target color region includes at least one of at least one first color region for color enhancement and a second color region for color restoration.
8. The method according to claim 7, wherein, in the case where the target color area in the target luminance area satisfies a color correction condition, performing color correction processing on the target color area to obtain a target image, comprises:
converting the target brightness region from a first color space to a second color space, wherein the second color space is different from the first color space;
performing color correction processing on the target luminance region in the second color space in a case where the target luminance region in the second color space satisfies the color correction condition;
and converting the target brightness area after the color correction processing from the second color space to the first color space to obtain a target image.
9. The method according to claim 7, wherein, in the case where the target color area in the target luminance area satisfies a color correction condition, performing color correction processing on the target color area to obtain a target image, comprises:
performing, when at least one first color region in the target brightness region satisfies the color enhancement condition, color enhancement processing on the at least one first color region to obtain a color-enhanced first color region; and
performing, when the second color region in the target brightness region satisfies the color restoration condition, color restoration processing on the second color region to obtain a color-restored second color region, so as to obtain the target image.
10. The method according to claim 9, wherein, in the case where at least one first color region in the target luminance region satisfies a color enhancement condition, performing color enhancement processing on the at least one first color region to obtain a color enhanced first color region, including:
increasing, when the at least one first color region in the target brightness region satisfies the color enhancement condition, the saturation of each pixel in the at least one first color region to obtain the color-enhanced first color region.
11. The method according to claim 9, wherein, in the case where at least one first color region in the target luminance region satisfies a color enhancement condition, performing color enhancement processing on the at least one first color region to obtain a color enhanced first color region, including:
determining the number of first pixels corresponding to the first color among the pixels of the at least one first color region of the target brightness region;
determining that the at least one first color region satisfies the color enhancement condition when the ratio between the number of first pixels and the target number of pixels in the target brightness region is greater than a preset ratio;
and performing color enhancement processing on the at least one first color region to obtain a first color region after color enhancement.
12. The method according to claim 9, wherein, in the case where the second color region in the target brightness region satisfies the color restoration condition, performing color restoration processing on the second color region to obtain a color-restored second color region includes:
reducing the saturation of each pixel in the second color region to obtain the color-restored second color region when the second color region in the target brightness region satisfies the color restoration condition.
13. The method according to any one of claims 7 to 12, further comprising:
determining the shooting scene corresponding to the image to be processed according to at least one of shooting parameters of the first image and shooting parameters of the at least one frame of intermediate image; and
when the shooting scene is a preset brightness scene, indicating that the second color region in the target brightness region satisfies the color restoration condition.
14. The method according to any one of claims 7 to 12, further comprising:
performing scene recognition on the second color region in the target brightness region, wherein, when the second color region does not belong to a preset shooting scene represented by the second color and belongs to a preset brightness scene, it is indicated that the second color region satisfies the color restoration condition.
15. The method according to any one of claims 7 to 12, further comprising:
determining a third target pixel whose pixel weight is greater than a weight threshold among the pixels of the at least one frame of intermediate image; and
when a preset number of pixels in the second color region correspond to the third target pixels, indicating that the second color region in the target brightness region satisfies the color restoration condition.
16. An image processing apparatus, comprising:
an acquisition module, configured to acquire an image to be processed, wherein the image to be processed is generated based on a first image and at least one frame of intermediate image, and the exposure time of the first image is longer than the exposure time of the intermediate image;
a brightness segmentation module, configured to perform brightness segmentation processing on at least one of the first image and the at least one frame of intermediate image to obtain a target brightness region in the image to be processed; and
a color correction module, configured to perform color correction processing on a target color region in the target brightness region to obtain a target image when the target color region satisfies a color correction condition.
17. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 15.
18. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 15.
19. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211455715.7A CN118071658A (en) | 2022-11-21 | 2022-11-21 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118071658A true CN118071658A (en) | 2024-05-24 |
Family
ID=91102637
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||