Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
To address the color anomaly in highlight regions of fused images produced by existing HDR algorithms, embodiments of the present invention provide an image fusion method, an image fusion apparatus, and an electronic system, which alleviate the color cast in the highlight region of the fused image, ensure a vivid fusion result, and improve the practical value of the fused image.
To facilitate understanding of the present embodiment, a detailed description will be given below of an image fusion method provided in an embodiment of the present invention.
First, referring to the schematic structural diagram of the electronic system shown in FIG. 1, this electronic system can be used to implement the image fusion method, image fusion apparatus, and electronic system of the embodiments of the present invention.
As shown in FIG. 1, an electronic system 100 includes one or more processing devices 102, one or more storage devices 104, an input device 106, an output device 108, and one or more image capture devices 110, which are interconnected via a bus system 112 and/or other types of connection mechanisms (not shown). It should be noted that the components and structure of the electronic system 100 shown in FIG. 1 are exemplary only, not limiting, and the electronic system may have other components and structures as desired.
The processing device 102 may be a server, a smart terminal, or a device containing a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, may process data for other components in the electronic system 100, and may control other components in the electronic system 100 to perform image fusion functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory, or the like. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processing device 102 to implement the image fusion functions (implemented by the processing device) in embodiments of the present invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
Image capture device 110 may capture multiple images of the same scene and store the captured multiple images in storage 104 for use by other components.
For example, the devices used for implementing the image fusion method, apparatus and electronic system according to the embodiments of the present invention may be integrally disposed, or may be disposed in a decentralized manner, such as integrally disposing the processing device 102, the storage device 104, the input device 106 and the output device 108, and disposing the image capturing device 110 at a designated position where an image can be captured. When the above-described devices in the electronic system are integrally provided, the electronic system may be implemented as an intelligent terminal such as a camera, a smart phone, a tablet computer, a vehicle-mounted terminal, and the like.
An embodiment of the present invention provides an image fusion method whose execution subject is an electronic device. The electronic device may prestore correspondences between a plurality of standard color temperature intervals and color channel parameters, or may acquire these correspondences from another storage device; the correspondences can be set according to actual conditions. The color channels include an R channel (red), a B channel (blue), and a G channel (green). In practice, the G channel is generally used as the denominator, and R/G and B/G values are counted; other channel ratios can also be used, for example with the B channel as the denominator, counting R/B and G/B values, set according to actual conditions. For ease of understanding, color channel parameters consisting of the red-green channel ratio (the R/G value) and the blue-green channel ratio (the B/G value) are used as an example herein. Table 1 shows a correspondence between standard color temperature intervals and color channel parameters, where adjacent color temperature intervals may or may not overlap:
TABLE 1
Based on the above correspondence, FIG. 2 shows a flowchart of the image fusion method provided in an embodiment of the present invention; the method includes the following steps:
Step S202, performing registration alignment on a plurality of images of the same scene to obtain a spatially aligned reference luminance image and at least one underexposed image; the exposure value (EV) corresponding to the reference luminance image is different from the EV of the underexposed image.
Specifically, a plurality of images of the same scene are acquired by a camera and are registered and aligned so that the frames are spatially aligned with one another. The registration alignment can be performed through homography transformation, the Mesh Flow algorithm, an optical flow algorithm, or the like. A reference luminance image and an underexposed image are then determined from the aligned images. The EV of the reference luminance image may or may not be 0; for example, if the luminance of an EV-2 image is converted to that of an EV0 image by applying gain, the EV-2 image can be used as the reference luminance image. The numbers of reference luminance images and underexposed images can be set according to the actual situation, and embodiments of the present invention do not limit the number of underexposed images.
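As an illustration of this step, the sketch below aligns one frame to the reference using a homography estimated from ORB feature matches in OpenCV; the embodiment equally permits Mesh Flow or optical-flow alignment, and the function name and parameter values are illustrative assumptions rather than part of the embodiment.

```python
import cv2
import numpy as np

def align_to_reference(reference: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Warp `image` onto `reference` via an ORB-feature homography (RANSAC)."""
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_img, des_img = orb.detectAndCompute(gray_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)[:500]
    src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```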
Step S204, acquiring corresponding relations between a plurality of standard color temperature intervals and color channel parameters, and determining effective pixel points of the highlight area based on the color channel parameters corresponding to each pixel point in the highlight area of the underexposed image and the corresponding relations.
In practical applications, the highlight region of each underexposed image includes a plurality of pixel points, among which there are overexposed pixel points and effective pixel points. The overexposed pixel points cause the highlight region to be overexposed and thus seriously distort the underexposed image; therefore, the effective pixel points in the highlight region need to be determined and the overexposed pixel points discarded.
Since each pixel point corresponds to an R/G value and a B/G value, whether a pixel point falls within a standard color temperature interval can be determined from its color channel parameters and the correspondences. If it does, the pixel point is determined to be an effective pixel point of the highlight region, so that the underexposed image can be optimized according to the effective pixel points, further ensuring a vivid fusion result.
Step S206, determining calibration parameters of the highlight area based on the color channel parameters corresponding to the effective pixel points in the highlight area, and calibrating the highlight area by using the calibration parameters of the highlight area to obtain the under-exposure optimized image.
In practical applications, a common practice for an underexposed image is to perform color restoration through an AWB (Automatic White Balance) algorithm, that is, to compute an AWB correction parameter and use it to correct the pixel points of the underexposed image, obtaining a corrected image. Accordingly, a calibration parameter for the highlight region, namely the AWB gain, is determined based on the effective pixel points found in the highlight region. Since the color channel parameters of each pixel point include an R/G value and a B/G value, the calibration parameter (AWB gain) of the highlight region includes an R gain corresponding to the R/G value and a B gain corresponding to the B/G value. Each effective pixel point in the highlight region is corrected based on the calibration parameter to obtain a corrected underexposed image, namely the underexposed optimized image.
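As a minimal sketch of this calibration, the code below computes the AWB gains from the effective highlight pixels using the mean-ratio formula given later in step S410 (R Gain = 1/(R/G_mean), B Gain = 1/(B/G_mean)) and applies them to the highlight region. The RGB channel order, the 8-bit value range, and the function names are assumptions.

```python
import numpy as np

def highlight_awb_gains(rgb, valid_mask):
    """AWB gains from effective highlight pixels: R Gain = 1/mean(R/G), B Gain = 1/mean(B/G)."""
    r = rgb[..., 0][valid_mask].astype(np.float64)
    g = np.maximum(rgb[..., 1][valid_mask].astype(np.float64), 1e-6)  # guard against /0
    b = rgb[..., 2][valid_mask].astype(np.float64)
    return 1.0 / np.mean(r / g), 1.0 / np.mean(b / g)

def apply_awb(rgb, region_mask, r_gain, b_gain):
    """Correct the highlight region by scaling its R and B channels against G."""
    out = rgb.astype(np.float64)
    out[region_mask, 0] *= r_gain
    out[region_mask, 2] *= b_gain
    return np.clip(out, 0, 255).astype(rgb.dtype)
```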
In addition, because the calibration parameters of the highlight region are determined from the effective pixel points of the highlight region, rather than directly from all pixel points in the highlight region as in existing methods, the precision of the calibration parameters is improved, the calibration precision of the highlight region is improved in turn, and the color restoration degree of the underexposed optimized image is ensured.
Step S208, fusing the reference luminance image and the underexposed optimized image to obtain a fused image corresponding to the same scene.
Specifically, if there is only one underexposed image, HDR fusion processing is directly performed on the reference luminance image and the underexposed optimized image corresponding to that underexposed image, obtaining a fused image corresponding to the same scene. If there are two underexposed images, where the EV corresponding to the first underexposed image is smaller than the EV corresponding to the second underexposed image and both EVs are larger than 0 (for example, the EV corresponding to the first underexposed image is 2 and that of the second is 4), HDR fusion processing is performed on the reference luminance image, the underexposed optimized image corresponding to the first underexposed image, and the underexposed optimized image corresponding to the second underexposed image, obtaining a fused image corresponding to the same scene.
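The embodiment does not prescribe a particular HDR fusion algorithm. As one plausible stand-in, the sketch below applies Mertens exposure fusion from OpenCV to the aligned reference image and the underexposed optimized images.

```python
import cv2
import numpy as np

def fuse_exposures(images):
    """Exposure-fuse a list of aligned 8-bit images (reference + optimized underexposures)."""
    merge = cv2.createMergeMertens()
    fused = merge.process([img.astype(np.float32) / 255.0 for img in images])
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```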
If the EV corresponding to the reference luminance image is not equal to 0, the reference luminance image is luminance-transformed according to the EV transformation rule corresponding to it to obtain a reference optimized image, and the reference optimized image and the underexposed optimized image are fused to obtain a fused image corresponding to the same scene; alternatively, the reference luminance image and the underexposed optimized image are fused to obtain a preliminary fused image corresponding to the same scene, and the preliminary fused image is luminance-transformed according to the EV transformation rule corresponding to the reference luminance image to obtain the fused image corresponding to the same scene. For example, if an EV-2 image is used as the reference luminance image and the underexposed images include a first underexposed image corresponding to EV-4 and a second underexposed image corresponding to EV-6, the luminance of the reference luminance image may be transformed according to its EV transformation rule (such as multiplying the EV-2 luminance by 4, i.e., EV0 = EV-2 × 4) so that the resulting reference optimized image corresponds to EV 0, and HDR fusion processing may then be performed on the reference optimized image and the underexposed optimized images to obtain a fused image corresponding to the same scene. Alternatively, HDR fusion processing is first performed on the reference luminance image corresponding to EV-2, the underexposed optimized image corresponding to EV-4, and the underexposed optimized image corresponding to EV-6 to obtain a preliminary fused image, and the preliminary fused image is then luminance-transformed according to the EV transformation rule corresponding to the reference luminance image to obtain the fused image corresponding to the same scene.
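The EV transformation rule amounts to scaling luminance by a power of two, since each EV step doubles the brightness; mapping EV-2 to EV0 therefore multiplies by 2^2 = 4. A minimal sketch, assuming 8-bit data:

```python
import numpy as np

def ev_shift(image, delta_ev):
    """Scale luminance by 2**delta_ev; e.g. delta_ev=2 lifts an EV-2 frame to EV0 (gain of 4)."""
    out = image.astype(np.float64) * (2.0 ** delta_ev)
    return np.clip(out, 0, 255).astype(image.dtype)
```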
According to the image fusion method provided by the embodiment of the present invention, the calibration parameter of the highlight region is determined from the effective pixel points in the highlight region of the underexposed image, the highlight region is calibrated with this calibration parameter to obtain the underexposed optimized image, and the reference luminance image and the underexposed optimized image are then fused to obtain the fused image corresponding to the same scene. Compared with existing methods, in which the reference luminance image is fused directly with an underexposed image calibrated using the calibration parameters of the reference luminance image, this alleviates the color cast in the highlight region of the fused image, ensures a vivid fusion result, and improves the practical value of the fused image.
On the basis of FIG. 2, another image fusion method is further provided in an embodiment of the present invention. The execution subject is the electronic device described above, which prestores correspondences between a plurality of standard color temperature intervals and color channel parameters, where the color channel parameters include the red-green channel ratio and the blue-green channel ratio. This method mainly describes the process of determining the effective pixel points of the highlight region based on the color channel parameters corresponding to each pixel point in the highlight region of the underexposed image and the correspondences. As shown in FIG. 3, the method includes the following steps:
Step S302, performing registration alignment on a plurality of images of the same scene to obtain a spatially aligned reference luminance image and at least one underexposed image; the EV corresponding to the reference luminance image is different from the EV of the underexposed image.
Step S304, determining a highlight region of the underexposed image based on the overexposed region of the reference luminance image.
Specifically, after the reference luminance image and the underexposed image are registered and aligned, the overexposed region of the reference luminance image is generally slightly larger than the highlight region of the underexposed image. Therefore, for the overexposed region of the reference luminance image, the R, B, and G channel values of each pixel point in the region are counted; if all three channel values are at the saturation value, the location of that pixel point is considered to belong to the highlight region. The highlight region of the underexposed image is thus determined by examining the channel values of the pixel points in the overexposed region.
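A minimal sketch of this saturation test, assuming 8-bit RGB data whose saturation value is 255:

```python
import numpy as np

def highlight_mask(ref_rgb, saturation=255):
    """Pixels of the overexposed region whose R, G and B values are all at the
    saturation value are taken to belong to the highlight region."""
    return np.all(ref_rgb >= saturation, axis=-1)
```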
Step S306, checking the color channel values of each pixel point in the highlight region.
Step S308, marking the pixel points in the highlight region whose color channel values are greater than or equal to a preset maximum value as invalid pixel points.
Specifically, for the three channel values (i.e., color channel values) of the R, B, and G channels of each pixel point in the highlight region, it is determined whether each color channel value is greater than or equal to a preset maximum value, for example 250 for 8-bit data. If a channel value is greater than or equal to the preset maximum value, the corresponding pixel point is considered an overexposed pixel point, i.e., an invalid pixel point. In addition, the R/G value and the B/G value of each pixel point can be calculated from its three color channel values; if either the R/G value or the B/G value is 255 (8-bit) / 1024 (10-bit), the corresponding pixel point is also judged to be an invalid pixel point. Invalid pixel points cause the highlight region to be overexposed and thus seriously distort the underexposed image, so they must be discarded when the pixel points of the highlight region are counted.
Step S310, calculating color channel parameters of the pixel points whose color channel values are smaller than the preset maximum value.
Step S312, determining the pixel points whose color channel parameters fall between the minimum color channel parameter and the maximum color channel parameter in the correspondences as effective pixel points of the highlight region.
Specifically, after the color channel parameters of the pixel points whose color channel values are smaller than the preset maximum value have been obtained, it is judged, based on the correspondences between the standard color temperature intervals and the color channel parameters, whether the color channel parameters of each pixel point lie between the minimum and maximum color channel parameters in the correspondence list. If so, i.e., the color channel parameters fall within the range corresponding to the high, middle, or low color temperature interval, the corresponding pixel point is an effective pixel point of the highlight region; if the color channel parameters fall outside the ranges corresponding to the high, middle, and low color temperature intervals, the corresponding pixel point is judged to be an invalid pixel point. Meanwhile, for each counted effective pixel point, the color temperature interval to which it belongs can be judged from its color channel parameters.
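The sketch below strings steps S306 to S312 together: it discards pixels with any channel at or above the preset maximum, computes R/G and B/G for the remainder, and keeps pixels whose ratios fall inside one of the calibrated interval ranges. The numeric ranges in the table are placeholders; the real values come from the calibration procedure of FIG. 5 and Table 1.

```python
import numpy as np

# Placeholder ranges per interval: (R/G min, R/G max, B/G min, B/G max).
# The actual values come from camera calibration (FIG. 5 / Table 1).
COLOR_TEMP_RANGES = {
    "high": (0.255, 0.60, 0.51, 0.90),
    "mid":  (0.55, 1.00, 0.40, 0.75),
    "low":  (0.95, 1.60, 0.20, 0.45),
}

def valid_highlight_pixels(rgb, region_mask, max_value=250):
    """Return (valid-pixel mask, per-pixel interval labels) for a highlight region."""
    r = rgb[..., 0].astype(np.float64)
    g = np.maximum(rgb[..., 1].astype(np.float64), 1e-6)
    b = rgb[..., 2].astype(np.float64)
    not_clipped = region_mask & np.all(rgb < max_value, axis=-1)   # step S308
    rg, bg = r / g, b / g                                          # step S310
    valid = np.zeros_like(not_clipped)
    interval = np.full(rgb.shape[:2], "", dtype=object)
    for name, (rg_lo, rg_hi, bg_lo, bg_hi) in COLOR_TEMP_RANGES.items():
        hit = (not_clipped & (rg >= rg_lo) & (rg <= rg_hi)
                           & (bg >= bg_lo) & (bg <= bg_hi))        # step S312
        valid |= hit
        # A boundary pixel keeps the last matching label here; the text counts
        # such pixels in both adjoining intervals.
        interval[hit] = name
    return valid, interval
```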
Step S314, determining calibration parameters of the highlight area based on the color channel parameters corresponding to the effective pixel points in the highlight area, and calibrating the highlight area by using the calibration parameters of the highlight area to obtain the under-exposure optimized image.
And step S316, fusing the reference brightness image and the under-exposure optimization image to obtain a fused image corresponding to the same scene.
Steps S314 to S316 are detailed in the foregoing embodiment and are not repeated here. Through the above determination process, the invalid pixel points in the highlight region can be discarded, so that the underexposed image is optimized according to the effective pixel points and a vivid fusion result is ensured.
On the basis of FIG. 2, another image fusion method is further provided in an embodiment of the present invention. The execution subject is an electronic device that prestores correspondences between a plurality of standard color temperature intervals and color channel parameters, where the color channel parameters include the red-green channel ratio and the blue-green channel ratio. This method focuses on the process of determining the calibration parameters of the highlight region based on the color channel parameters corresponding to the effective pixel points in the highlight region. As shown in FIG. 4, the method includes the following steps:
step S402, carrying out registration alignment on a plurality of images of the same scene to obtain a reference brightness image and at least one under-exposed image which are spatially aligned; the EV corresponding to the reference luminance image is different from the EV of the underexposed image.
Step S404, determining effective pixel points of the highlight area based on the color channel parameters and the corresponding relation corresponding to each pixel point in the highlight area of the underexposed image.
For the above steps, reference may be made to the foregoing embodiments; details are not repeated here.
Step S406, determining a first target standard color temperature interval corresponding to the reference luminance image and a second target standard color temperature interval corresponding to the highlight region of the underexposed image according to the correspondences.
Specifically, the color temperature interval of each pixel point in the reference luminance image can be determined from its color channel parameters and the correspondences, and the first target standard color temperature interval corresponding to the reference luminance image is then determined. In most cases, the image corresponding to EV0 is used as the reference luminance image, and since the dominant light source of a shooting scene is usually ambient light with a low color temperature, the first target standard color temperature interval corresponding to the reference luminance image is usually the low color temperature interval. Other color temperature intervals can be set according to the actual situation, which is not limited in the embodiments of the present invention.
Meanwhile, the second target standard color temperature interval corresponding to the highlight region can be determined based on the color channel parameters of the effective pixel points in the highlight region of the underexposed image and the correspondences. Since the effective pixel points in the highlight region may fall into different color temperature intervals, in practical applications the second target standard color temperature interval corresponding to the highlight region needs to be determined according to the proportions of the effective pixel points in the highlight region corresponding to the plurality of standard color temperature intervals.
Specifically, an underexposed image may contain a plurality of highlight regions. In practical applications, the effective pixel points in the underexposed image can be extracted and the image binarized, with effective pixel points set to 1 and invalid pixel points set to 0, forming a mask of the highlight regions; after a certain amount of dilation and erosion is applied to the mask, the connected regions of 1s in the processed image give the individual highlight regions.
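A minimal sketch of the mask construction, assuming OpenCV; a morphological close (dilation followed by erosion) stands in for the dilation-erosion operation, and connected-component labeling separates the individual highlight regions.

```python
import cv2
import numpy as np

def highlight_regions(valid_mask, kernel_size=5):
    """Binarize the effective-pixel mask (1/0), close small gaps, and label
    the connected regions of 1s."""
    mask = valid_mask.astype(np.uint8)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    num_labels, labels = cv2.connectedComponents(mask)
    return num_labels - 1, labels  # label 0 is the background
```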
Optionally, the region containing the most pixel points among the multiple highlight regions may be taken as the effective region, and the second target standard color temperature interval corresponding to the highlight region determined based on the proportions of the effective pixel points in this effective region corresponding to the plurality of standard color temperature intervals. For example, suppose an underexposed image has three highlight regions. After the effective pixel points of each region are counted, the numbers of effective pixel points in highlight region 1, highlight region 2, and highlight region 3 are A1, A2, and A3 respectively, with A3 > A2 > A1. Highlight region 3 can then be taken as the effective region, the second target standard color temperature interval corresponding to highlight region 3 determined according to the proportions of the plurality of standard color temperature intervals among its effective pixel points, and that interval taken as the second target standard color temperature interval corresponding to the highlight region of the underexposed image.
Optionally, the second target standard color temperature interval corresponding to each highlight region may be determined separately, according to the proportions of the effective pixel points of that highlight region corresponding to the plurality of standard color temperature intervals; for example, the second target standard color temperature intervals corresponding to highlight regions 1, 2, and 3 of the underexposed image are determined from the proportions of the standard color temperature intervals among the effective pixel points of regions 1, 2, and 3, respectively. Alternatively, the proportions of the plurality of standard color temperature intervals among the effective pixel points of all highlight regions together may be counted, and a single second target standard color temperature interval for all highlight regions determined from these proportions; for example, the proportions among all effective pixel points of highlight regions 1, 2, and 3 are counted, and the second target standard color temperature interval corresponding to regions 1, 2, and 3 is determined accordingly. How the second target standard color temperature interval of a highlight region is determined can be set according to the actual situation, which is not limited in the embodiments of the present invention.
In addition, the standard color temperature intervals include a high color temperature interval, a middle color temperature interval, and a low color temperature interval; the process of determining the second target standard color temperature interval corresponding to a highlight region according to the proportions of its effective pixel points corresponding to the plurality of standard color temperature intervals is as follows:
(1) counting a first ratio of first type effective pixel points belonging to a high color temperature interval, a second ratio of second type effective pixel points belonging to a medium color temperature interval and a third ratio of third type effective pixel points belonging to a low color temperature interval in a high light region;
(2) comparing the first ratio value, the second ratio value and the third ratio value;
(3) if the first ratio value is maximum and is larger than a preset ratio threshold, determining that the high light area corresponds to a high color temperature interval;
(4) if the first ratio is smaller than or equal to the preset ratio threshold, setting the effective pixel points corresponding to the smallest of the first, second, and third ratios as invalid pixel points, and determining the second target standard color temperature interval of the highlight region based on the color channel parameters corresponding to the remaining effective pixel points in the highlight region.
For example, suppose that for a highlight region the preset ratio threshold is 50%. If the first ratio, of first-type effective pixel points belonging to the high color temperature interval, is the largest and exceeds 50%, the highlight region is determined to correspond to the high color temperature interval, and the pixel points of the other color temperature intervals in the region are discarded. If neither the first ratio nor the second ratio, of second-type effective pixel points belonging to the middle color temperature interval, exceeds 50%, but both are greater than the third ratio, of third-type effective pixel points belonging to the low color temperature interval, the third-type effective pixel points are treated as invalid, and the color channel parameters of the first-type and second-type effective pixel points are used to determine the second target standard color temperature interval of the highlight region.
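A sketch of steps (1) to (4), reusing per-pixel interval labels of the kind produced in the valid-pixel step. The generalization that whichever interval has the largest ratio wins once it exceeds the threshold is an assumption; the text spells out only the high-color-temperature case.

```python
import numpy as np

def region_color_temp(interval_labels, valid_mask, threshold=0.5):
    """Decide the second target standard color temperature interval of one region."""
    labels = interval_labels[valid_mask]
    ratios = {k: float(np.mean(labels == k)) for k in ("high", "mid", "low")}  # step (1)
    top = max(ratios, key=ratios.get)                                          # step (2)
    if ratios[top] > threshold:                                                # step (3)
        return top, valid_mask
    # Step (4): mark the minority interval's pixels invalid, decide from the rest.
    minority = min(ratios, key=ratios.get)
    updated = valid_mask & (interval_labels != minority)
    labels = interval_labels[updated]
    remaining = {k: int(np.sum(labels == k)) for k in ratios if k != minority}
    return max(remaining, key=remaining.get), updated
```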
Step S408, comparing whether the first target standard color temperature interval is the same as the second target standard color temperature interval; if not, step S410 is performed, and if the same, step S412 is performed.
In practical applications, the highlight region exhibits a color cast only when its calibration parameter (AWB gain) differs greatly from the calibration parameter of the reference luminance image. The AWB gain of the reference luminance image corresponding to EV0 is mainly determined by the dominant light source. Suppose the dominant light source of the current shooting scene is ambient light with a low color temperature, while the highlight region contains light of a high color temperature. The AWB gain calculated for the reference luminance image is determined by the main ambient light, so it is close to the AWB gain of that ambient light and suits a low-color-temperature scene. The highlight region of the reference luminance image, however, is overexposed; even when the low-color-temperature AWB gain of the dominant light source is applied, it remains a saturated highlight region, so the highlight region of the EV0 reference luminance image is not visibly affected and requires no separate AWB processing.
In contrast, the highlight region of the underexposed image corresponding to EV-2 is in an underexposed (unsaturated) state. When the AWB gain calculated from the low-color-temperature ambient light is applied to a high-color-temperature region, the large blue gain in that AWB gain makes the high-color-temperature region of the EV-2 underexposed image appear bluish. It is therefore necessary to judge which standard color temperature interval the AWB gain of the EV0 reference luminance image belongs to; only when the color temperature interval of the highlight region of the underexposed image is inconsistent with that of the reference luminance image are dedicated calibration parameters determined for the highlight region. Otherwise, the AWB gain of the reference luminance image is used directly to calibrate the highlight region. Hence the first target standard color temperature interval is compared with the second: if they differ, step S410 is executed; if they are the same, step S412 is executed.
Step S410, determining calibration parameters of the highlight area based on the color channel parameters corresponding to the effective pixel points in the highlight area, and calibrating the highlight area by using the calibration parameters of the highlight area to obtain an under-exposure optimization image.
Specifically, the mean values of the color channel parameters over all effective pixel points in the highlight region can be calculated, and the calibration parameters of the highlight region determined from these means, i.e., R Gain = 1/(R/G_mean) and B Gain = 1/(B/G_mean). Alternatively, the mean values of the color channel parameters over only those effective pixel points belonging to the second target standard color temperature interval are calculated, and the calibration parameters of the highlight region determined from these means; for example, if the second target standard color temperature interval of the highlight region is the high color temperature interval, then R Gain = 1/(R/G high-temperature mean) and B Gain = 1/(B/G high-temperature mean). This eliminates the influence of the small areas corresponding to invalid pixel points in the highlight region, retains only the areas corresponding to the heavily weighted effective pixel points, improves the precision of the calibration parameters and in turn the calibration precision of the highlight region, and ensures the color restoration degree of the underexposed optimized image.
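A sketch of both variants, assuming the per-pixel R/G and B/G maps and interval labels computed earlier; pass target=None for the all-pixel mean, or the name of the second target interval to restrict the mean:

```python
import numpy as np

def calibration_gains(rg, bg, valid_mask, interval_labels, target=None):
    """R Gain = 1/(R/G_mean), B Gain = 1/(B/G_mean) over the chosen effective pixels."""
    mask = valid_mask if target is None else valid_mask & (interval_labels == target)
    return 1.0 / np.mean(rg[mask]), 1.0 / np.mean(bg[mask])
```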
Step S412, calibrating the highlight area based on the calibration parameters of the reference brightness image to obtain an under-exposure optimization image.
Specifically, the calibration parameters of the reference luminance image are used directly to calibrate the highlight region, obtaining the underexposed optimized image. For example, in a dim yellow street-lamp scene, the overall tone of the environment is yellow, and the AWB gain of the ambient light, i.e., the AWB gain of the reference luminance image, mainly compensates for this yellow environment. The calibration parameter calculated from the yellow street lamp, i.e., the AWB gain of the underexposed image, likewise corresponds to a color temperature environment in which yellow needs to be compensated. Therefore, to keep the style of the picture uniform, the AWB gain of the reference luminance image can be adopted as the calibration parameter of the underexposed image, and the correction performed with it yields the underexposed optimized image.
And step S414, fusing the reference brightness image and the under-exposure optimization image to obtain a fused image corresponding to the same scene.
In this way, the first target standard color temperature interval of the reference luminance image is compared with the second target standard color temperature interval of the underexposed image to decide how the calibration parameters of the highlight region in the underexposed image are determined: when the two intervals are the same, the calibration parameters of the reference luminance image are adopted as the calibration parameters of the highlight region; when they differ, the calibration parameters of the highlight region are determined from the color channel parameters corresponding to the effective pixel points in the highlight region. The calibration parameters of the highlight region are thus selected adaptively, which improves their precision and ensures the color restoration degree of the underexposed optimized image.
In another possible embodiment, an embodiment of the present invention further provides a method for generating a correspondence between a standard color temperature interval and a color channel parameter, as shown in fig. 5, where the method includes the following steps:
step S502, acquiring a photographed image of the electronic device under a plurality of preset standard color temperature light sources.
Specifically, photographs are taken with a camera under standard color temperature light sources of 7500K, 6500K, 5000K, 4000K, 3000K, and 2300K, and the corresponding photographed images are obtained. It should be noted that the standard color temperature light sources can be set according to the actual situation, and embodiments of the present invention are not limited in this respect.
Step S504, counting and determining the color channel parameters of the photographed image corresponding to each preset standard color temperature light source.
Specifically, for the photographed images under the plurality of preset standard color temperature light sources, the red-green channel ratio (R/G) and the blue-green channel ratio (B/G) of each pixel point are counted for each standard color temperature light source, so that the color temperature intervals of the preset standard color temperature light sources can be divided according to the corresponding color channel parameters.
Step S506, generating a corresponding relationship between each of the standard color temperature intervals and the color channel parameter based on the color channel parameter corresponding to each of the preset standard color temperature light sources.
Specifically, according to the R/G and B/G values of the pixel points under each standard color temperature light source, the color temperature range is divided into three intervals: a high color temperature interval of 5000K-7500K, a middle color temperature interval of 3000K-5000K, and a low color temperature interval of 2300K-3000K. The correspondences between the plurality of standard color temperature intervals and the color channel parameters are then established, so that color temperature intervals can be distinguished by color channel parameters.
In addition, each color temperature range can be expanded by setting a reasonable error range. For example, with an error range of +/-15%, a pixel point whose R/G and B/G values lie within +/-15% of a color temperature range is also considered an effective pixel point. For instance, the R/G value corresponding to D75 (7500K) is 0.3 and the B/G value is 0.6; after applying the error range, the color temperature range of D75 becomes R/G in [0.255, 0.345] and B/G in [0.51, 0.69]. When the R/G and B/G values of a pixel point fall within these ranges, the pixel point is considered effective, and since D75 belongs to the high color temperature interval, it is an effective pixel point of the high color temperature interval.
It should be noted that when a pixel point of a highlight region lies within the color temperature range of D75, it is considered a D75 pixel point. When a pixel point lies in a transitional color temperature interval, for example belonging to both the high and the middle color temperature interval, it is counted in both the high color temperature interval and the middle color temperature interval. And when a pixel point lies at the junction of D75 and D65 (6500K), it is considered to belong to the high color temperature interval.
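A sketch of the calibrated ranges with the +/-15% tolerance; only the D75 pair (0.3, 0.6) comes from the text above, the remaining entries are placeholders.

```python
# Calibrated mean (R/G, B/G) per standard illuminant; the 7500K pair matches the
# D75 example in the text, the other values are placeholders.
CALIBRATED_RATIOS = {7500: (0.30, 0.60), 6500: (0.35, 0.55), 5000: (0.45, 0.50),
                     4000: (0.60, 0.40), 3000: (0.80, 0.30), 2300: (1.10, 0.20)}
TOLERANCE = 0.15

def ratio_range(cct):
    """Expand the calibrated (R/G, B/G) pair of one illuminant by the error range."""
    rg, bg = CALIBRATED_RATIOS[cct]
    return ((rg * (1 - TOLERANCE), rg * (1 + TOLERANCE)),
            (bg * (1 - TOLERANCE), bg * (1 + TOLERANCE)))

def is_effective(rg_value, bg_value, cct):
    (rg_lo, rg_hi), (bg_lo, bg_hi) = ratio_range(cct)
    return rg_lo <= rg_value <= rg_hi and bg_lo <= bg_value <= bg_hi

# ratio_range(7500) -> ((0.255, 0.345), (0.51, 0.69)), reproducing the D75 example.
```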
For ease of understanding, assume that the EV value corresponding to the reference luminance image is 0, that of the first underexposed image is 2, and that of the second underexposed image is 4. In the existing method, the highlight region of the reference luminance image EV0 is fused with that of the first underexposed image EV2, and then with that of the second underexposed image EV4, and the highlight regions of EV2 and EV4 both use the calibration parameters of the reference luminance image EV0. Under a low-color-temperature light source, the R channel component is large and the B channel component is small, so achieving normal white balance requires compensating with a larger B gain to bring the R, G, and B channel values closer together. A high-color-temperature highlight region is just the opposite: the B channel component is large and the R channel component is small, so in theory a larger R gain should be compensated. When the low-color-temperature AWB gain of the fused reference luminance image EV0 is applied to the high-color-temperature highlight regions of the first underexposed image EV2 and the second underexposed image EV4, those regions turn bluish, and the highlight region of the fused image becomes bluish as well; that is, the color of the highlight region of the fused image is abnormal and cannot meet practical application requirements.
As shown in FIG. 6, the image fusion method according to the embodiment of the present invention includes the following steps:
(61) first, calibrating the camera: photographing the same scene under light sources from 2300K to 7500K with the camera, and counting the color channel parameters of each image, i.e., the R/G and B/G values, to establish the correspondences between the plurality of standard color temperature intervals and the color channel parameters; a reasonable error range of +/-15% is set, i.e., a calibration data range corresponding to each standard color temperature range, and pixel points in a highlight region whose R/G and B/G values fall within +/-15% of a standard color temperature range, i.e., within the calibration data range, are effective pixel points;
(62) carrying out registration alignment on the reference brightness image EV0, the first under-exposed image EV2 and the second under-exposed image EV 4;
(63) performing highlight overexposure area detection on the reference luminance image EV0 to determine a highlight area of the reference luminance image EV 0;
(64) determining the highlight region of the first underexposed image EV2 based on the highlight region of the reference luminance image EV0; specifically, extracting the highlight region boundary of the reference luminance image EV0 and determining the highlight region of the first underexposed image EV2 from this boundary; then performing steps (641)-(647);
(641) calculating the R/G value and the B/G value of each pixel point in the highlight area of the first underexposed image EV 2;
(642) judging whether each pixel point is an overexposure pixel point (namely an invalid pixel point), if so, discarding the pixel point, and if not, executing the step (643);
(643) judging whether the R/G value and the B/G value of the pixel point are in the calibration data range, if not, discarding the pixel point, if so, determining the pixel point as an effective pixel point of a highlight area, and executing the step (644);
(644) calculating the R/G value and the B/G value of each effective pixel point in the highlight area, and judging a standard color temperature interval corresponding to the effective pixel point;
(645) the standard color temperature interval corresponding to the highlight region is determined based on the standard color temperature interval corresponding to each effective pixel point, which may specifically refer to the foregoing embodiment, and details are not repeated herein in the embodiment of the present invention;
(646) determining the calibration parameters of the highlight region based on the color channel parameters corresponding to the effective pixel points in the highlight region, i.e., calculating R Gain = 1/(R/G_mean) and B Gain = 1/(B/G_mean);
(647) calibrating the highlight region with the calibration parameters of the highlight region to obtain the underexposed optimized image corresponding to the first underexposed image EV2; specifically, multiplying the R channel of each effective pixel point in the highlight region of the first underexposed image EV2 by R Gain and its B channel by B Gain, thereby performing color restoration on the first underexposed image EV2 and obtaining its underexposed optimized image;
(65) determining the highlight region of the second underexposed image EV4 based on the highlight region of the first underexposed image EV2, and performing steps (651)-(657); the specific method can refer to step (64), and the highlight region of the second underexposed image EV4 may also be determined based on the highlight region of the reference luminance image EV0, which is not described in detail here;
(651) calculating the R/G value and the B/G value of each pixel point in the highlight area of the second underexposed image EV 4;
(652) judging whether each pixel point is an overexposure pixel point (namely an invalid pixel point), if so, discarding the pixel point, and if not, executing the step (653);
(653) judging whether the R/G value and the B/G value of the pixel point are in the calibration data range, if not, discarding the pixel point, if so, determining the pixel point as an effective pixel point of a highlight area, and executing the step (654);
(654) calculating the R/G value and the B/G value of each effective pixel point in the highlight area, and judging a standard color temperature interval corresponding to the effective pixel point;
(655) judging a standard color temperature interval corresponding to the highlight area based on the standard color temperature interval corresponding to each effective pixel point;
(656) determining the calibration parameters of the highlight region based on the color channel parameters corresponding to the effective pixel points in the highlight region, i.e., calculating R Gain = 1/(R/G_mean) and B Gain = 1/(B/G_mean);
(657) calibrating the highlight region with the calibration parameters of the highlight region to obtain the underexposed optimized image corresponding to the second underexposed image EV4; specifically, multiplying the R channel of each effective pixel point in the highlight region of the second underexposed image EV4 by R Gain and its B channel by B Gain, thereby performing color restoration on the second underexposed image EV4 and obtaining its underexposed optimized image;
(66) performing fusion processing on the reference luminance image EV0, the underexposed optimized image corresponding to the first underexposed image EV2, and the underexposed optimized image corresponding to the second underexposed image EV4 to obtain the fused image corresponding to the same scene.
In this way, according to the above image fusion method, the calibration parameters of the highlight region are determined from the effective pixel points in the highlight region of the underexposed image, the highlight region is calibrated with these parameters to obtain the underexposed optimized image, and the reference luminance image and the underexposed optimized image are fused to obtain the fused image corresponding to the same scene. Compared with the existing method, in which the reference luminance image is fused directly with an underexposed image calibrated using the calibration parameters of the reference luminance image, this alleviates the color cast in the highlight region of the fused image, ensures a vivid fusion result, and improves the practical value of the fused image.
On the basis of the foregoing embodiment, an embodiment of the present invention further provides an image fusion apparatus, which is applied to an electronic device, and as shown in fig. 7, the apparatus includes a registration alignment module 71, an effective pixel point determining module 72, an underexposure optimization image obtaining module 73, and a fusion processing module 74, which are connected in sequence, where functions of each module are as follows:
a registration alignment module 71, configured to perform registration alignment on multiple images of the same scene to obtain a spatially aligned reference luminance image and at least one under-exposed image; wherein, the exposure value EV corresponding to the reference brightness image is different from the EV of the underexposed image;
an effective pixel point determining module 72, configured to obtain corresponding relationships between a plurality of standard color temperature intervals and color channel parameters, and determine an effective pixel point in a highlight region based on the color channel parameter and the corresponding relationship corresponding to each pixel point in the highlight region of the underexposed image;
the under-exposure optimization image acquisition module 73 is configured to determine a calibration parameter of the highlight region based on a color channel parameter corresponding to an effective pixel point in the highlight region, and calibrate the highlight region by using the calibration parameter of the highlight region to obtain an under-exposure optimization image;
and the fusion processing module 74 is configured to perform fusion processing on the reference luminance image and the under-exposure optimized image to obtain a fusion image corresponding to the same scene.
The image fusion apparatus provided by the embodiment of the present invention determines the calibration parameter of the highlight region from the effective pixel points in the highlight region of the underexposed image, calibrates the highlight region with this calibration parameter to obtain the underexposed optimized image, and then fuses the reference luminance image and the underexposed optimized image to obtain the fused image corresponding to the same scene. Compared with the existing method, in which the reference luminance image is fused directly with an underexposed image calibrated using the calibration parameters of the reference luminance image, this alleviates the color cast in the highlight region of the fused image, ensures a vivid fusion result, and improves the practical value of the fused image.
In one possible embodiment, the effective pixel point determining module 72 is further configured to: determining a highlight region of an underexposed image based on an overexposed region of a reference luminance image; checking the color channel value of each pixel point in the highlight area; marking the pixel points with the color channel value larger than or equal to the preset maximum value in the highlight area as invalid pixel points; calculating color channel parameters of pixel points with color channel values smaller than a preset maximum value; and determining the pixel point of which the color channel parameter is between the minimum color channel parameter and the maximum color channel parameter in the corresponding relation as an effective pixel point of the highlight area.
In another possible embodiment, before the underexposure optimization image acquisition module 73 operates, the apparatus is further configured to: determine, according to the correspondences, a first target standard color temperature interval corresponding to the reference luminance image and a second target standard color temperature interval corresponding to the highlight region of the underexposed image; compare whether the first target standard color temperature interval is the same as the second target standard color temperature interval; and if not, perform the determination of calibration parameters for the highlight region based on the color channel parameters corresponding to the effective pixel points in the highlight region.
In another possible embodiment, the determining, according to the correspondence, a first target standard color temperature section corresponding to the reference luminance image and a second target standard color temperature section corresponding to a high light region of the underexposed image includes: determining a first target standard color temperature interval corresponding to the reference brightness image based on the color channel parameters and the corresponding relation of each pixel point in the reference brightness image; and determining a second target standard color temperature interval corresponding to the high light region based on the color channel parameters and the corresponding relation of the effective pixel points in the high light region of the underexposed image.
In another possible embodiment, the determining a second target standard color temperature section corresponding to a high light region based on the color channel parameters and the corresponding relationship of the effective pixel points in the high light region of the underexposed image includes: and determining a second target standard color temperature interval corresponding to the high light region according to the proportion of the effective pixel points in the high light region to the plurality of standard color temperature intervals.
In another possible embodiment, there are a plurality of highlight regions, and determining the second target standard color temperature interval corresponding to the highlight region according to the proportions of the effective pixel points in the highlight region corresponding to the plurality of standard color temperature intervals includes one of the following: taking the region with the most pixel points among the plurality of highlight regions as an effective region, and determining the second target standard color temperature interval corresponding to the highlight region based on the proportions of the effective pixel points in the effective region corresponding to the plurality of standard color temperature intervals; determining the second target standard color temperature interval corresponding to each highlight region according to the proportions of the effective pixel points of that highlight region corresponding to the plurality of standard color temperature intervals; or counting the proportions of the effective pixel points in all highlight regions corresponding to the plurality of standard color temperature intervals, and determining the second target standard color temperature interval corresponding to all highlight regions according to the counted proportions.
In another possible embodiment, the plurality of standard color temperature intervals are a high color temperature interval, a middle color temperature interval, and a low color temperature interval; determining the second target standard color temperature interval corresponding to the highlight region according to the proportions of the effective pixel points in the highlight region corresponding to the plurality of standard color temperature intervals includes: counting a first ratio of first-type effective pixel points belonging to the high color temperature interval, a second ratio of second-type effective pixel points belonging to the middle color temperature interval, and a third ratio of third-type effective pixel points belonging to the low color temperature interval in the highlight region; comparing the first, second, and third ratios; if the first ratio is the largest and greater than a preset ratio threshold, determining that the highlight region corresponds to the high color temperature interval; and if the first ratio is smaller than or equal to the preset ratio threshold, setting the effective pixel points corresponding to the smallest of the first, second, and third ratios as invalid pixel points, and determining the second target standard color temperature interval of the highlight region based on the color channel parameters corresponding to the remaining effective pixel points in the highlight region.
In another possible embodiment, the under-exposure optimization image obtaining module 73 is further configured to: calculating the mean value of color channel parameters corresponding to all effective pixel points in the highlight area; the calibration parameters for the highlight region are determined based on the calculated mean.
In another possible embodiment, the under-exposure optimization image obtaining module 73 is further configured to: calculating the mean value of the color channel parameters corresponding to the effective pixel points belonging to the second target standard color temperature interval in the highlight region; the calibration parameters for the highlight region are determined based on the calculated mean.
In another possible embodiment, there are two underexposed images; the EV corresponding to the first underexposed image is smaller than the EV corresponding to the second underexposed image, and the EVs corresponding to the first and second underexposed images are both greater than 0. The fusion processing module 74 is further configured to: perform high dynamic range (HDR) fusion processing on the reference brightness image, the underexposed optimized image corresponding to the first underexposed image and the underexposed optimized image corresponding to the second underexposed image, to obtain a fused image corresponding to the same scene.
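The embodiment does not fix a particular fusion operator here; as a stand-in only, a generic well-exposedness-weighted fusion of aligned images may be sketched as follows. All images are assumed to be aligned float arrays scaled to [0, 1].

    import numpy as np

    def hdr_fuse(images):
        """Generic weighted exposure fusion; NOT the specific HDR fusion of
        the embodiment. Weights favour mid-tone (well-exposed) pixels."""
        stack = np.stack([np.asarray(im, dtype=float) for im in images])
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
        weights = weights / (weights.sum(axis=0) + 1e-12)
        return (weights * stack).sum(axis=0)

    # fused = hdr_fuse([reference_image, under_opt_1, under_opt_2])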
In another possible embodiment, the EV corresponding to the reference brightness image is equal to 0.
In another possible embodiment, the EV corresponding to the reference brightness image is not equal to 0; the fusion processing module 74 is further configured to: perform brightness transformation on the reference brightness image according to the EV transformation rule corresponding to the reference brightness image to obtain a reference optimized image, and fuse the reference optimized image with the underexposed optimized images to obtain a fused image corresponding to the same scene; or fuse the reference brightness image with the underexposed optimized images to obtain a primary fused image corresponding to the same scene, and perform brightness transformation on the primary fused image according to the EV transformation rule corresponding to the reference brightness image to obtain the fused image corresponding to the same scene.
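The two orders of operation described above may be illustrated as follows, reusing the hdr_fuse stand-in; the form of the EV transformation rule (a gain of 2 raised to the power of minus the reference EV, applied to linear intensities) is an assumption for illustration only.

    import numpy as np

    def apply_ev(image, ev):
        """Hypothetical EV transformation rule: scale linear intensities by
        2**ev and clip to [0, 1]; the embodiment's actual rule may differ."""
        return np.clip(np.asarray(image, dtype=float) * (2.0 ** ev), 0.0, 1.0)

    def transform_then_fuse(reference, ref_ev, under_opts):
        # Order 1: transform the reference brightness image first, then fuse.
        ref_opt = apply_ev(reference, -ref_ev)
        return hdr_fuse([ref_opt] + list(under_opts))

    def fuse_then_transform(reference, ref_ev, under_opts):
        # Order 2: fuse first, then transform the primary fused image.
        primary = hdr_fuse([reference] + list(under_opts))
        return apply_ev(primary, -ref_ev)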
In another possible embodiment, the apparatus is further configured to: acquire shot images under a plurality of preset standard color temperature light sources; statistically determine the color channel parameters of the shot images corresponding to each preset standard color temperature light source; and generate the correspondences between the plurality of standard color temperature intervals and the color channel parameters based on the color channel parameters corresponding to each preset standard color temperature light source.
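The construction of the correspondences may be illustrated as follows, assuming that each interval's range is derived from the median (R/G, B/G) ratios of the calibration shots and widened by a fixed margin; both choices are placeholders for whatever statistics the embodiment actually uses.

    import numpy as np

    def build_correspondence(calibration_shots, margin=0.1):
        """calibration_shots: {interval_name: list of (H, W, 3) float RGB
        images shot under the corresponding standard color temperature light
        source}. Returns a rectangular (R/G, B/G) range per interval."""
        table = {}
        for name, shots in calibration_shots.items():
            rgs, bgs = [], []
            for im in shots:
                im = np.asarray(im, dtype=float)
                g = im[..., 1] + 1e-12          # avoid division by zero
                rgs.append(float(np.median(im[..., 0] / g)))
                bgs.append(float(np.median(im[..., 2] / g)))
            table[name] = {
                "rg": (min(rgs) - margin, max(rgs) + margin),
                "bg": (min(bgs) - margin, max(bgs) + margin),
            }
        return table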
The image fusion device provided by the embodiment of the present invention has the same technical features as the image fusion method provided by the foregoing embodiment, and can therefore solve the same technical problems and achieve the same technical effects.
The embodiment of the present invention further provides an electronic device including a processor and a memory; the memory stores machine-executable instructions that can be executed by the processor, and the processor executes the machine-executable instructions to implement the image fusion method described above.
Referring to fig. 8, the electronic device includes a processor 80 and a memory 81; the memory 81 stores machine-executable instructions that can be executed by the processor 80, and the processor 80 executes the machine-executable instructions to implement the image fusion method described above.
The electronic device shown in fig. 8 further includes a bus 82 and a communication interface 83; the processor 80, the communication interface 83 and the memory 81 are connected through the bus 82.
The memory 81 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 83 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network and the like may be used. The bus 82 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus and the like. For ease of illustration, only one double-headed arrow is shown in fig. 8, but this does not indicate that there is only one bus or one type of bus.
The processor 80 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 80 or by instructions in the form of software. The processor 80 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the methods disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory 81, and the processor 80 reads the information in the memory 81 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The embodiments of the present invention also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the image fusion method described above.
The computer program product of the image fusion method, the image fusion device and the electronic system provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which will not be repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be construed broadly, for example, as a fixed connection, a removable connection or an integral connection; as a mechanical connection or an electrical connection; or as a direct connection, an indirect connection through an intervening medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features thereof; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.