
CN113920031A - Image adjusting method and device, electronic equipment and computer readable storage medium - Google Patents

Image adjusting method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN113920031A
Authority
CN
China
Prior art keywords
image
bit
split
brightness
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111248981.8A
Other languages
Chinese (zh)
Inventor
何慕威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111248981.8A priority Critical patent/CN113920031A/en
Publication of CN113920031A publication Critical patent/CN113920031A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image adjusting method, an image adjusting device, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be adjusted; stretching the image to be adjusted to obtain a high-bit-width image; splitting the high-bit-width image according to preset gain values to obtain split images with different brightness; and calculating a fusion weight for each split image according to its pixel values, and fusing the split images by using the fusion weights to obtain a target image. By adopting the method, the brightness and the dynamic range of the image can be adjusted to a suitable range, so that the brightness and the dynamic effect of the adjusted image are effectively improved.

Description

Image adjusting method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image adjusting method, an image adjusting apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, some specific scenes in the field of 3D vision require brightness adjustment and dynamic range adjustment of an image, so that images meeting the requirements of different scenes can be obtained.
However, conventional image adjustment methods usually adjust the brightness of the image directly with a tone mapping module. Because it is difficult to adjust the brightness and the dynamic range of the image to a suitable range in this way, a good image adjustment effect cannot be obtained in different scenes.
Disclosure of Invention
The embodiment of the application provides an image adjusting method, an image adjusting device, electronic equipment and a computer readable storage medium, which can ensure that the brightness and the dynamic range of an image are adjusted to a proper range, thereby effectively improving the brightness and the dynamic effect of the adjusted image.
An image adjustment method, comprising:
acquiring an image to be adjusted;
stretching the image to be adjusted to obtain a high-bit-width image;
splitting the high-bit-width image according to a preset gain value to obtain split images with different brightness;
and calculating to obtain a fusion weight of the split image according to the pixel value of the split image, and fusing the split image by using the fusion weight to obtain a target image.
An image adjusting apparatus comprising:
the acquisition module is used for acquiring an image to be adjusted;
the processing module is used for stretching the image to be adjusted to obtain a high-bit-width image;
the splitting processing module is used for splitting the high-bit-width image according to a preset gain value to obtain split images with different brightness;
the calculation module is used for calculating the fusion weight of the split image according to the pixel value of the split image;
and the fusion module is used for fusing the split images by using the fusion weight to obtain a target image.
An electronic device comprises a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the image adjusting method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
According to the image adjusting method, the image adjusting device, the computer equipment and the storage medium, the image to be adjusted is obtained and stretched to obtain a high-bit-width image, and the high-bit-width image is split according to preset gain values to obtain split images with different brightness. Fusion weights of the split images with different brightness are calculated according to the pixel values of the split images, and the split images are fused using the fusion weights to obtain a target image. An optimal fusion weight is selected for each pixel when the fusion weights are calculated, and the fusion weight serves as an adjustment parameter that controls the contribution proportion of each split image in the target image. That is, by selecting the optimal weight at each pixel of each split image, the behavior of the image brightness in different scenes can be flexibly controlled, and the optimal fusion weights ensure that the brightness and dynamic range of the final target image are adjusted to a suitable range. In other words, by splitting the brightness of the image to be adjusted and fusing the split images, the brightness and dynamic range of the image to be adjusted are brought into a suitable range, and the brightness and dynamic effect of the adjusted image are effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a diagram illustrating an exemplary embodiment of an image adjustment method;
FIG. 2 is a flow diagram of a method for image adjustment in one embodiment;
FIG. 3 is a graph of pixel values or luminance values in one embodiment;
FIG. 4 is a flowchart illustrating the step of splitting a high-bit-width image according to a preset gain value to obtain split images with different brightness in one embodiment;
FIG. 5 is a flowchart of the step of calculating the fusion weight of at least two split images with different luminance in one embodiment;
FIG. 6 is a flow diagram that illustrates the processing of the HDR fused image by the mobile terminal in one embodiment;
FIG. 7 is a flow diagram that illustrates the processing of the image stretch brightening module in one embodiment;
FIG. 8 is a flow diagram that illustrates the processing of Laplace fusion for 8BIT images of different luminance in one embodiment;
FIG. 9 is a block diagram showing the structure of an image adjusting apparatus according to an embodiment;
FIG. 10 is a diagram illustrating an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of an application environment of an image adjustment method in an embodiment. As shown in fig. 1, the application environment includes a terminal 102, and the application environment may be an environment in which a user interacts with the terminal 102. The terminal 102 has a camera 104 mounted therein, and the camera 104 may be a front, rear, rotary, or pop-up camera. The terminal 102 can acquire an image to be adjusted acquired through the camera 104, and the terminal 102 stretches the image to be adjusted to obtain a high-bit-width image; the terminal 102 splits the high bit width image according to a preset gain value to obtain split images with different brightness; the terminal 102 calculates fusion weights of split images with different brightness according to the pixel values of the split images, and fuses the split images by using the fusion weights to obtain a target image. After the terminal 102 obtains the target images, the terminal 102 may store the target images in a classified manner, or the terminal 102 sends the target images to the server corresponding to the identification information according to the identification information corresponding to the target images. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server may be implemented by an independent server or a server cluster formed by a plurality of servers. It is understood that the image adjustment method provided by the embodiment of the present application may also be executed by a server.
FIG. 2 is a flow diagram of an image adjustment method in one embodiment. The image adjusting method in this embodiment is described by taking the example of the method performed in the terminal in fig. 1. As shown in fig. 2, the image adjustment method includes steps 202 to 208.
Step 202, acquiring an image to be adjusted.
The image to be adjusted refers to an image which needs to be adjusted. Due to the complex environment of the photographed object, images captured in certain specific scenes may be overexposed or too dark, so that the visual effect of the captured images is poor and further adjustment is required. The image to be adjusted may include an image shot by the terminal, a video stream image collected by the camera, an image obtained by the terminal from a network, or the like. The image to be adjusted in the embodiment of the application may be a High Dynamic Range (HDR) image, which provides a larger dynamic range and more image detail, and can better reflect the visual effect of the real environment.
Specifically, the terminal can acquire an image to be adjusted shot by the camera. In addition, the terminal may also download an image from a server or other cloud platforms, and use the downloaded image as an image to be adjusted, where the manner of obtaining the image to be adjusted is not limited. It can be understood that the image to be adjusted in this example may also be a high dynamic range fused image obtained by fusing a bright frame image and a dark frame image, that is, the terminal may obtain an HDR fused image, and use the obtained HDR fused image as the image to be adjusted.
For example, a user may start an application program with an image recognition function in the terminal through a trigger operation, and the application program calls a camera built in the terminal to acquire an image to be adjusted.
Step 204, stretching the image to be adjusted to obtain a high-bit-width image.
The stretching processing refers to performing linear stretching on the image to be adjusted: all pixel points in the image are transformed according to a linear transformation function. Linear stretching can include direct linear stretching, clipped linear stretching, piecewise stretching, and the like. In a captured image, for example in a certain wavelength band, the gray values may be concentrated in a narrow interval; without stretching, details are not displayed clearly, so linear stretching processing is needed. The principle of linear stretching can be understood as follows: the color values of an image may be distributed over a small proportion of the available range; for an 8-bit image whose values occupy only part of the range, that region needs to be expanded into the whole 8-bit color domain. For example, a frame whose brightness or chroma values range only from 4 to 50 cannot display clear details; if those values are stretched from the range 4-50 to the range 0-255, an image displaying more details is obtained.
The high-BIT-width image refers to an image whose bit width is greater than or equal to a preset threshold; for example, if the preset threshold is 8BIT, the terminal may treat any image with a bit width of at least 8BIT as a high-bit-width image. The bit width refers to the number of bits used to store each luminance or chrominance value. For example, if the output format of an image is 8BIT, each luminance or chrominance value of the frame is stored in a variable with a bit width of 8 bits, so its range is 0-255. In the case of a 10BIT image, the range of luminance or chrominance values is 0-1023.
Specifically, after the terminal acquires the image to be adjusted, the terminal may perform linear stretching processing on the image to be adjusted to obtain a high-bit-width image. For example, the terminal may automatically generate a corresponding gain value based on a preset gain function, and perform linear operation on the image to be adjusted and the gain value to obtain a high-bit-width image.
For example, assuming that the image to be adjusted acquired by the terminal is an 8BIT image, the terminal may acquire a gain value of 4 automatically generated based on a preset gain function, and multiply the 8BIT image by the gain value 4, that is, multiply the image to be adjusted by 4 pixel by pixel, so as to obtain a 10BIT image. In addition, the terminal can further use nonlinear tone mapping to adjust the brightness of the 10BIT image, obtaining a brightened high-bit-width image. It is understood that the linear stretching and the nonlinear tone mapping adopted in this embodiment may be integrated into one execution module in a computer program; for example, they may be integrated as functions of an image stretching and brightening module, and the terminal may call this module to perform the stretching, brightening, and so on.
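As an illustration of the stretching and brightening described above, the following sketch (using NumPy; the function names and the gamma value are assumptions, not from the patent) multiplies an 8BIT image by a gain of 4 to obtain a 10BIT image, then applies a simple nonlinear tone mapping:

```python
import numpy as np

def stretch_to_high_bit_width(img_8bit: np.ndarray, gain: int = 4) -> np.ndarray:
    """Linearly stretch an 8BIT image (values 0-255) into the 10BIT range
    (0-1023) by multiplying every pixel by the gain value."""
    stretched = img_8bit.astype(np.uint16) * gain
    return np.clip(stretched, 0, 1023)

def brighten(img_10bit: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Optional nonlinear tone mapping: normalize to [0, 1], apply a gamma
    curve (gamma < 1 brightens darker regions), and rescale to 10 bits."""
    normalized = img_10bit / 1023.0
    return np.clip((normalized ** gamma) * 1023.0, 0, 1023).astype(np.uint16)
```

Working in the wider 10BIT integer range before further adjustment preserves precision, as the embodiment below explains.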
Step 206, splitting the high-bit-width image according to a preset gain value to obtain split images with different brightness.
Here, the gain value refers to an amplification factor; for example, the gain values may be set to 2^0, 2^(2/3), 2^(4/3), 2^2, and the like. The gain value in the embodiment of the present application may be a preset fixed gain value, or a calculation method for the gain value may be preset and the system automatically generates the corresponding gain value according to that method; the calculation and generation of the gain value are not limited here.
A split image is an image obtained by splitting the high-bit-width image according to a gain value; different gain values yield split images with different brightness. For example, if the terminal acquires 3 different gain values, there are 3 corresponding split images. In this embodiment, split images with different brightness are used to simulate multi-exposure images.
Specifically, the terminal may automatically generate gain values based on a preset gain function or a preset gain-value calculation formula, and split the high-bit-width image according to the gain values to obtain split images with different brightness. For example, assume the terminal obtains the gain values 2^0, 2^1 and 2^2 based on a preset gain function; the terminal splits the high-bit-width image into 3 8BIT images according to these gain values, obtaining 3 8BIT images with different brightness, i.e., simulating 3 levels of differently exposed frames. In addition, when the terminal splits the high-bit-width image, the pixel values of the output split images need to satisfy a preset brightness condition, which may also be configured in advance; for example, the average brightness value of each image needs to be between 0 and 255.
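A minimal sketch of the splitting step, under the assumption that dividing the 10BIT image by each gain value, with clipping to the 8BIT range, is what produces the differently exposed split images; the patent does not spell out the exact arithmetic:

```python
import numpy as np

def split_by_gain(img_10bit: np.ndarray, gains=(1, 2, 4)) -> list:
    """Split a 10BIT image (values 0-1023) into several 8BIT images of
    different brightness. Dividing by a larger gain yields a darker split
    image, and the clipping simulates different exposure levels. This
    mapping is one plausible reading of the text, not a verbatim
    implementation of the patent's splitting."""
    return [np.clip(img_10bit // g, 0, 255).astype(np.uint8) for g in gains]
```

With gains (2^0, 2^1, 2^2) = (1, 2, 4), this yields the 3 levels of simulated exposure described above.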
Step 208, calculating a fusion weight for each split image according to its pixel values, and fusing the split images by using the fusion weights to obtain a target image.
The fusion weights are the weights corresponding to the different split images. For example, the weight corresponding to the first split image may be a first fusion weight, and the weights of different split images may differ. The fusion weight is used as an adjustment parameter that controls the contribution proportion of each split image in the target image: by selecting the optimal weight at each pixel of each split image, the behavior of the image brightness in different scenes can be flexibly controlled, and the optimal weights ensure that the brightness and dynamic range of the image to be adjusted are brought into a suitable range in different scenes.
The target image is an image obtained by fusing the split images according to the fusion weight, the brightness and the dynamic state of the obtained target image are adjusted to a proper range, and the brightness and the dynamic effect of the image to be adjusted are effectively improved. For example, a frame of image to be adjusted with a dark whole color is adjusted, and the output target image can display more details.
Specifically, the terminal can calculate fusion weights of split images with different brightness based on a preset fusion weight function, and fuse the split images according to the fusion weights to obtain a target image; and the fusion weight is used for representing the contribution proportion of each split image in the target image.
For example, the terminal may calculate fusion weights of split images with different luminances based on a preset fusion weight function, and perform Laplacian Pyramid (LP) fusion on the split images using these weights, so as to obtain a target image after Laplacian pyramid fusion. It is understood that this embodiment includes, but is not limited to, Laplacian Pyramid fusion; other fusion methods may also be used, and no specific limitation is made herein.
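The weight computation and fusion can be sketched as follows. The patent does not disclose its fusion weight function, so a well-exposedness weight common in exposure fusion is assumed here, and the Laplacian pyramid is replaced by a single-level weighted average for brevity:

```python
import numpy as np

def well_exposedness_weight(img_8bit: np.ndarray,
                            target: float = 128.0,
                            sigma: float = 64.0) -> np.ndarray:
    """Per-pixel weight favoring pixels near mid-gray: the closer a pixel
    is to the target value, the more it contributes. This Gaussian form is
    a common choice in exposure fusion, assumed here."""
    diff = img_8bit.astype(np.float64) - target
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

def fuse(split_images: list) -> np.ndarray:
    """Normalize the per-image weights so they sum to 1 at each pixel, then
    take the weighted average. A Laplacian-pyramid blend, as the text
    suggests, would fuse each pyramid level this way; a single-level
    weighted average is shown for brevity."""
    weights = np.stack([well_exposedness_weight(im) for im in split_images])
    weights /= weights.sum(axis=0, keepdims=True)
    stack = np.stack([im.astype(np.float64) for im in split_images])
    return np.clip((weights * stack).sum(axis=0), 0, 255).astype(np.uint8)
```

Pixels that are well exposed in a given split image thus dominate the corresponding region of the target image.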
In the image adjustment method of this embodiment, the image to be adjusted is obtained and stretched to obtain a high-bit-width image, and brightness splitting is performed on the high-bit-width image according to preset gain values to obtain split images with different brightness. Fusion weights of the split images with different brightness are calculated from their pixel values, and the split images are fused using those weights to obtain a target image. Because an optimal fusion weight is selected for each pixel when the fusion weights are calculated, the optimal weights ensure that the brightness and dynamic range of the final target image are adjusted to a suitable range. In other words, by splitting the image to be adjusted and fusing the split images, the brightness and dynamic range of the image to be adjusted are brought into a suitable range, and the brightness and dynamic effect of the adjusted image are effectively improved.
In one embodiment, the step of obtaining the image to be adjusted includes:
respectively calculating the weights of the input first image and the second image to obtain a first weight corresponding to the first image and a second weight corresponding to the second image; wherein the brightness of the first image is higher than the brightness of the second image;
and fusing the first image and the second image based on the first weight and the second weight to obtain an image to be adjusted.
The high dynamic range fused image refers to an image obtained by fusing HDR multiframes, for example, an image obtained by fusing a bright frame and a dark frame of an HDR is a high dynamic range fused image, the image to be adjusted may be a high dynamic range fused image, and the high dynamic range fused image is used for providing luminance information.
Specifically, the terminal may calculate weights of the input first image and the input second image, respectively, to obtain a first weight corresponding to the first image and a second weight corresponding to the second image; wherein the brightness of the first image is higher than the brightness of the second image. For example, the first image may be a bright frame image, the second image may be a dark frame image, and the terminal may calculate weights of the input bright frame image and dark frame image, respectively, to obtain a first weight corresponding to the bright frame image and a second weight corresponding to the dark frame image, where the calculated weights are calculated according to pixel values of the bright frame image and the dark frame image. Further, the terminal can fuse the bright frame image and the dark frame image based on the first weight and the second weight to obtain a high dynamic range fused image, and the high dynamic range fused image is used as an image to be adjusted, so that the high dynamic range fused image can provide more detailed information for subsequent image adjustment, and the adjusted image is better in effect.
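A hedged sketch of the bright/dark frame fusion above; the threshold-based weight ramp is an illustrative assumption, since the text only states that the weights are computed from the pixel values of the two frames:

```python
import numpy as np

def hdr_fuse(bright: np.ndarray, dark: np.ndarray,
             threshold: float = 230.0) -> np.ndarray:
    """Fuse a bright frame and a dark frame: where the bright frame nears
    saturation, favor the dark frame (which retains highlight detail);
    elsewhere favor the bright frame (which retains shadow detail)."""
    b = bright.astype(np.float64)
    d = dark.astype(np.float64)
    # Weight of the dark frame ramps from 0 to 1 as the bright frame saturates.
    w_dark = np.clip((b - threshold) / (255.0 - threshold), 0.0, 1.0)
    w_bright = 1.0 - w_dark
    return np.clip(w_bright * b + w_dark * d, 0, 255).astype(np.uint8)
```

The fused result serves as the HDR image to be adjusted, carrying detail from both exposures into the subsequent stretching and splitting steps.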
In one embodiment, the step of stretching the image to be adjusted to obtain the high-bit-width image includes:
linearly stretching an image to be adjusted to obtain a high-bit-width image;
the method further comprises the following steps:
and carrying out nonlinear tone mapping on the pixel values of the high-bit-width image to obtain the high-bit-width image after brightness adjustment.
After the terminal acquires the image to be adjusted, the terminal can perform linear stretching processing on the image to be adjusted to obtain a high-bit-width image. Specifically, the terminal may transform all points in the image to be adjusted according to a linear transformation function, assuming that the image after HDR fusion is an 8BIT image, the terminal takes the 8BIT image as the image to be adjusted, and when the terminal linearly stretches the image to be adjusted, the fused 8BIT image is linearly stretched to a high BIT image by multiplying a gain value.
For example, if the terminal calculates a gain value of 4 according to the preset gain-value calculation function, the terminal multiplies the fused 8BIT image by the gain value 4, that is, multiplies the image to be adjusted by 4 pixel by pixel, to obtain a 10BIT image. It is understood that the linear calculation method in this embodiment includes, but is not limited to, multiplying by different gain values to obtain images with different bit widths; other operations may also be used.
Further, the terminal may perform nonlinear tone mapping on the pixel values of the high-bit-width image to obtain the high-bit-width image after brightness adjustment. Namely, the terminal can adjust the image brightness by adopting a nonlinear tone mapping mode. For example, the terminal may perform tone mapping by using a Gamma correction method to adjust the brightness of the high bit width image to obtain the high bit width image after adjusting the brightness, and it is understood that other non-linear tone mapping methods may also be used in this embodiment, which is not limited herein.
In this embodiment, the image to be adjusted is brightened into a high BIT image in order to avoid errors caused by quantization. Because calculations are generally performed on integer values, converting to a high BIT image before adjusting the values ensures better precision without resorting to floating-point computation, and ensures that the brightness and dynamic range of the image to be adjusted can be brought into a suitable range in different scenes.
In one embodiment, the step of performing non-linear tone mapping on pixel values of the high-bit-width image to obtain the high-bit-width image after adjusting the brightness includes:
and correcting the pixel value of the high-bit-width image by using the correction function to obtain the pixel value of the tone-mapped high-bit-width image.
The terminal can perform nonlinear tone mapping on the pixel values of the high-bit-width image to obtain the high-bit-width image with the brightness adjusted. Specifically, the terminal may perform tone mapping by using a Gamma correction method, that is, a Gamma correction formula is used for a pixel Value of the image to obtain a mapped pixel Value OutValue. If the gamma is equal to 1, the non-linear tone mapping is not performed, and the gamma value in this embodiment may be preset. The specific calculation formula is as follows:
OutValue = Value^gamma    (1)
wherein the gamma value is a value obtained by multi-scene experimental debugging; value represents a pixel Value of an input image; OutValue represents the mapped pixel value. In this embodiment, the tone mapping using Gamma correction is to brighten some darker areas to a certain extent, and it is understood that other correction functions may also be used in this embodiment, and are not specifically limited herein.
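Formula (1) can be sketched as follows. The normalization to [0, 1] before applying the exponent is an assumption (it makes gamma < 1 brighten darker regions, as the text describes), and the gamma value shown is illustrative rather than one obtained by the patent's multi-scene debugging:

```python
import numpy as np

def gamma_tone_map(value, gamma: float = 0.75, max_value: float = 1023.0):
    """Formula (1): OutValue = Value ** gamma, applied on pixel values
    normalized to [0, 1] so the result stays within the image's bit range.
    gamma = 1 leaves the image unchanged; gamma < 1 brightens darker
    regions."""
    normalized = np.asarray(value, dtype=np.float64) / max_value
    return (normalized ** gamma) * max_value
```

With gamma = 1 the mapping is the identity, matching the statement that no nonlinear tone mapping is performed in that case.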
Therefore, after the image to be adjusted is converted into the high BIT image, better precision can be ensured by adjusting the value, and the brightness and the dynamic state of the image to be adjusted in different scenes can be ensured to be adjusted to be in a proper range.
In one embodiment, the step of performing non-linear tone mapping on pixel values of the high-bit-width image to obtain the high-bit-width image after adjusting the brightness includes:
carrying out nonlinear tone mapping on the high bit width image according to a mapping relation between a pixel value before tone mapping and a pixel value after tone mapping, which is configured in advance, so as to obtain the high bit width image after tone mapping; the mapping relation is obtained by a preset curve chart or a mapping table.
The terminal can perform nonlinear tone mapping on the pixel values of the high-bit-width image to obtain the high-bit-width image with adjusted brightness. Specifically, the terminal may perform nonlinear tone mapping on the high-bit-width image according to a pre-configured mapping relationship between pixel values before and after tone mapping, or between luminance values before and after tone mapping, to obtain the tone-mapped high-bit-width image; the mapping relationship is obtained from a preset curve graph or a mapping table. For example, fig. 3 shows such a graph of pixel values or luminance values: the abscissa is the original pixel value or luminance value of the image, that is, the value before tone mapping, and the ordinate is the pixel value or luminance value after tone mapping. The curve in fig. 3 may be obtained from multiple experiments, and the terminal performs the nonlinear tone mapping through this curve. For instance, if the abscissa of target point A in fig. 3 is 4.5, that is, the original pixel value or luminance value of the high-bit-width image is 4.5, the terminal can find from the mapping relationship of the curve that the corresponding ordinate of target point A is 6.5; the terminal thus performs the nonlinear tone mapping on the high-bit-width image according to the mapping relationship and obtains the tone-mapped pixel value or luminance value 6.5. Therefore, after the image to be adjusted is converted into a high-BIT image, adjusting the values preserves better precision, and it can be ensured that the brightness and dynamic range of the image to be adjusted are brought into a proper range in different scenes.
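The curve or mapping-table lookup described above can be sketched as a per-pixel interpolation between configured control points. The control points below are illustrative placeholders loosely based on the fig. 3 example (the point at 4.5 mapping to 6.5), not values specified by this application:

```python
import numpy as np

# Hypothetical control points of the pre-configured tone-mapping curve:
# curve_in = pixel values before tone mapping, curve_out = values after.
curve_in = np.array([0.0, 4.5, 64.0, 255.0])
curve_out = np.array([0.0, 6.5, 96.0, 255.0])

def tone_map(image):
    """Map every pixel through the curve, interpolating linearly between points."""
    return np.interp(image, curve_in, curve_out)

img = np.array([[0.0, 4.5], [64.0, 255.0]])
mapped = tone_map(img)  # the point (4.5, 6.5) lies on the configured curve
```

A mapping table with one entry per input level would replace the interpolation with a direct index lookup; the curve form above simply needs fewer stored points.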
In an embodiment, as shown in fig. 4, the step of splitting the high bit width image according to a preset gain value to obtain split images with different brightness includes:
step 402, obtain the target number of split images.
At step 404, gain values generated based on the target number are obtained.
And 406, splitting the high-bit-width image according to the gain value to obtain split images of the target number after splitting.
The target number refers to the number of split images obtained by splitting the high-bit-width image. For example, if the high-bit-width image is split into 3 images with different luminance values, that is, the number of split images obtained by splitting is 3, then the target number of split images is 3.
Specifically, the terminal may obtain the target number of split images to be produced from the high-bit-width image, and obtain gain values generated based on that target number; a gain value may be generated automatically according to a preset gain function, or the terminal may directly obtain a preset gain value. Further, the terminal can split the high-bit-width image according to the gain values to obtain the target number of split images. For example, the terminal first obtains the target number of split images to be produced from the high-BIT-width image; assume the image is to be split into 3 images of 8 BITs, that is, the terminal obtains a target number of 3, and then calculates the corresponding gain values; assume the terminal obtains, from the calculation of the preset function, the gain values 2^0, 2^1 and 2^2. Further, the terminal can split the high-BIT image according to the calculated gain values to obtain 3 8-BIT images with different brightness values, i.e., simulate a 3-step multi-frame image with different exposures. It is understood that the embodiment includes but is not limited to splitting into other numbers of 8-BIT images; for example, when splitting into 4 8-BIT images, the gain values may be set to 2^0, 2^(2/3), 2^(4/3) and 2^2, and the like.
Therefore, the terminal can perform splitting processing according to the high-bit-width image and the corresponding gain value to obtain split images with different brightness values, and the split images can be used for simulating multi-level exposure images, so that more detailed information can be provided, and the output target image can be ensured to have better precision.
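One way to generate the gain values from the target number alone, consistent with the examples 2^0, 2^1, 2^2 for 3 images and 2^0, 2^(2/3), 2^(4/3), 2^2 for 4 images, is to space the exponents evenly between 0 and 2. This spacing rule is an inference from the two examples given above, not a preset function stated by this application:

```python
def gain_values(target_count):
    """Space the exponents evenly from 0 to 2, yielding `target_count` gains."""
    step = 2.0 / (target_count - 1)
    return [2.0 ** (i * step) for i in range(target_count)]

gains3 = gain_values(3)  # 2^0, 2^1, 2^2
gains4 = gain_values(4)  # 2^0, 2^(2/3), 2^(4/3), 2^2
```

Each gain then produces one simulated exposure step when the high-bit-width image is divided by it.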
In one embodiment, the method further includes a step of obtaining at least two gain values, specifically including:
acquiring at least two gain values; the gain value is generated based on a preset gain function;
and splitting the high-bit-width image according to at least two gain values to obtain at least two split images with different brightness after splitting.
Specifically, the terminal may obtain at least two gain values, where the gain values may be generated based on a preset gain function, and the terminal splits the high-bit-width image according to the at least two gain values to obtain at least two split images with different brightness. For example, assume the preset gain function is y = 2^N, where N denotes the number of images and y denotes the gain value. For another example, assume the preset gain function is y = a × x, where a denotes a preset coefficient, x denotes the luminance value of the image, and y denotes the gain value.
For example, a scheme can be designed to automatically generate the corresponding gain value according to a calculation formula, such as calculating the average brightness mean of the image (ranging from 0 to 255). When the target brightness value is 50, the terminal automatically calculates the gain value 50.0/mean, i.e., the ratio of 50.0 to mean; when the target brightness value is 100, the gain value is 100.0/mean; when it is 150, the gain value is 150.0/mean; and when it is 200, the gain value is 200.0/mean. It is understood that the gain value calculation method in the present embodiment includes, but is not limited to, the above method, and other gain value calculation methods may be used.
In the embodiment, the terminal can automatically calculate the corresponding gain value according to the image to be adjusted, split the image to be adjusted according to the gain value, and provide more brightness information for the subsequent fusion of the split image, so that the image to be adjusted can be adjusted to the proper range of brightness and dynamic, and the brightness and dynamic effect of the adjusted image are effectively improved.
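The mean-brightness rule above can be sketched directly; the target brightness values 50, 100, 150 and 200 are the ones listed in the example, and the image below is a made-up constant image:

```python
import numpy as np

def gains_from_mean(image, targets=(50.0, 100.0, 150.0, 200.0)):
    """One gain per target brightness: the ratio target/mean, mean in 0-255."""
    mean = float(np.mean(image))
    return [t / mean for t in targets]

img = np.full((4, 4), 100.0)   # toy image whose average brightness is 100
gains = gains_from_mean(img)   # ratios of each target to the mean
```

Dividing the image by each gain then yields split images whose average brightness lands near the corresponding target.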
In one embodiment, the method further comprises:
performing brightness splitting processing on the high-bit-width image according to the gain value to obtain a first output pixel value corresponding to the split image after the brightness splitting processing;
calculating to obtain a second output pixel value corresponding to the split image after the brightness split processing according to the bit number of the high-bit width image;
and taking the smaller value of the first output pixel value and the second output pixel value as the pixel value of the split image after the brightness split processing.
The BIT width of the high-BIT-width image refers to a BIT number, for example, the BIT number of the image with the BIT width of 8BIT is 8BIT, and the BIT number of the image with the BIT width of 10BIT is 10 BIT.
Specifically, after the terminal acquires the gain values generated based on the target number, or after the terminal acquires at least two gain values generated based on a preset gain function, the terminal may perform brightness splitting processing on the high-bit-width image according to the gain values to obtain a first output pixel value corresponding to the split image after the brightness splitting processing, and further, the terminal may calculate a second output pixel value corresponding to the split image after the brightness splitting processing according to the number of bits of the high-bit-width image; and the terminal takes the smaller value of the first output pixel value and the second output pixel value as the pixel value of the split image after the brightness split processing, namely the terminal selects the minimum value of the first output pixel value and the second output pixel value and takes the minimum value as the pixel value of the split image after the brightness split processing.
For example, an 8-BIT image is described below as an example. The terminal first obtains the target number of split images to be produced from the high-BIT-width image; assume the image is split into 3 images of 8 BITs. The terminal then calculates the corresponding gain values; assume the terminal calculates, from the preset function, the gain values 2^0, 2^1 and 2^2. Further, the terminal can split the high-BIT image according to the calculated gain values to obtain 3 8-BIT images with different brightness, i.e., simulate a 3-step multi-frame image with different exposures. The specific calculation formula is as follows:

OutValue = min(255, Value/2^N) (2)

wherein OutValue is the output pixel value, Value is the input pixel value, 2^N is the gain value, and min takes the minimum of the two values. It is understood that the 255 in the above formula (2) varies with the BIT width of the image: the number of BITs of an 8-BIT image is 8, so the range of the luminance or chrominance values is 0 to 255. For a 10-BIT image the range is 0 to 1023, and formula (2) becomes:

OutValue = min(1023, Value/2^N) (3)

Assume the input pixel value of the high-bit-width image is 100 and the gain value is 2^0; the terminal then calculates, according to the above formula (2), the output OutValue = min(255, 100/2^0) = 100. That is, the terminal performs brightness splitting on the high-bit-width image according to the gain value 2^0 to obtain the first output pixel value (100/2^0 = 100) corresponding to the split image, calculates the second output pixel value 255 corresponding to the split image from the 8-BIT number of BITs of the high-BIT-width image, and selects the minimum of the first output pixel value and the second output pixel value as the pixel value of the split image after the brightness splitting. Therefore, after the image to be adjusted is converted into a high-BIT image, adjusting the values preserves better precision, and it can be ensured that the brightness and dynamic range of the image to be adjusted are brought into a proper range in different scenes.
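Formulas (2) and (3) differ only in the clipping ceiling, which is 2^bits − 1, so both can be sketched in one function; the sample pixel values are illustrative:

```python
import numpy as np

def split_image(high_bit_img, gain, bit_depth):
    """Formulas (2)/(3): OutValue = min(2**bit_depth - 1, Value / gain)."""
    max_val = (1 << bit_depth) - 1       # 255 for 8 BIT, 1023 for 10 BIT
    first = high_bit_img / gain          # first output pixel value
    return np.minimum(first, max_val)    # keep the smaller of the two values

img10 = np.array([100.0, 1000.0])                        # 10-BIT input pixels
dark = split_image(img10, gain=2.0 ** 2, bit_depth=8)    # divide by 4, clip 255
bright = split_image(img10, gain=2.0 ** 0, bit_depth=8)  # divide by 1, clip 255
```

The larger gain darkens highlights into range (the dark frame), while the unit gain clips them at 255 (the bright frame), which is exactly the multi-exposure behaviour being simulated.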
In one embodiment, the step of obtaining the target image by calculating a fusion weight of the split image according to the pixel value of the split image and fusing the split image by using the fusion weight includes:
calculating the fusion weight of at least two split images with different brightness;
normalizing the fusion weight to obtain the weight after normalization;
and performing Laplace fusion on the split image according to the weight after the normalization processing to obtain a target image.
Specifically, the terminal calculates the fusion weight of at least two frames of split images with different brightness, normalizes the fusion weight obtained by calculation to obtain the normalized weight, and performs Laplace pyramid fusion on the split images with different brightness by using the normalized weight to obtain a Laplace pyramid fused result image. The weights of each pixel corresponding to the split images with different brightness are normalized, and the specific formula is as follows:
out_weight[i] = weight[i] / (weight[1] + weight[2] + … + weight[n]) (5)

wherein weight[i] is the weight of the i-th frame, out_weight[i] is the normalized weight of the i-th frame, the sum in the denominator runs over all n frames, and i is an integer greater than 0.
The normalization is performed on the weight of each pixel across the split images with different brightness because only a normalized weighted sum yields a valid image. For example, if the calculated weights were 0.8, 0.6 and 0.7 with corresponding pixel values 200, 150 and 175, the weighted sum without normalization would exceed 255; since the number of BITs of an 8-BIT image is 8, the range of luminance or chrominance values is 0-255, so the resulting image would be invalid.
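The per-pixel normalization can be sketched as follows; the weight values 0.8, 0.6 and 0.7 and pixel values 200, 150 and 175 are the ones from the example above:

```python
import numpy as np

def normalize_weights(weights):
    """Divide each frame's weight map by the per-pixel sum over all frames."""
    stack = np.stack(weights).astype(float)
    return stack / stack.sum(axis=0)

w = [np.array([[0.8]]), np.array([[0.6]]), np.array([[0.7]])]
out = normalize_weights(w)   # per-pixel weights now sum to 1
# Weighted combination of the example pixel values stays within 0-255:
fused = sum(o * p for o, p in zip(out, [200.0, 150.0, 175.0]))
```

Without the division, the same combination would evaluate to 372.5 and overflow the 8-BIT range.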
In this embodiment, when the fusion weight is calculated, the optimal fusion weight is selected for each pixel, and the fusion weight is used as an adjustment parameter to adjust the contribution ratio of each split image in the target image, that is, the performance of the image brightness in different scenes can be flexibly controlled by selecting the optimal weight in each split image pixel, and the optimal fusion weight can ensure that the brightness and the dynamic state of the finally obtained target image are adjusted to a proper range, thereby effectively improving the brightness and the dynamic effect of the output target image.
In one embodiment, the step of calculating the fusion weight of at least two split images with different brightness comprises:
and inputting the pixel values of at least two split images with different brightness into a preset weight function, and outputting to obtain a fusion weight corresponding to each split image.
The preset fusion weight function comprises a mean value, a variance and a fusion coefficient of the image, and the fusion coefficient of the image can be set in advance according to experimental data.
Specifically, the terminal can input pixel values of at least two split images with different brightness into a preset weight function, and output to obtain a fusion weight corresponding to each split image; the preset fusion weight function comprises a mean value, a variance and a fusion coefficient of the image. Namely, the terminal calculates corresponding fusion weights for split images with different brightness. The specific way to calculate the weights is as follows:
weight = exp(-(x - mean)^2 / (2 × sigma^2)) (4)
where x is the input pixel value, weight is the weight, and mean and sigma are the weight parameters.
The weight parameters mean and sigma may be specific parameters obtained through experiments; for example, mean = 1.5 × M and sigma = 2.0 × S may be obtained experimentally, where the physical meanings of M and S are the mean and standard deviation of the image, and S^2 corresponds to the variance of the image. The 1.5 in mean = 1.5 × M is a preset fusion coefficient, and the 2.0 in sigma = 2.0 × S is likewise a preset fusion coefficient.
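The weight function can be sketched as below, assuming the standard Gaussian form (the formula image itself is not reproduced in this excerpt) with the experimentally obtained parameters mean = 1.5 × M and sigma = 2.0 × S; the sample image statistics are made up:

```python
import math

def fusion_weight(x, img_mean, img_std):
    """Gaussian weight: exp(-(x - mean)^2 / (2 * sigma^2)),
    with mean = 1.5 * M and sigma = 2.0 * S as in the example parameters."""
    mean = 1.5 * img_mean
    sigma = 2.0 * img_std
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

w_peak = fusion_weight(150.0, img_mean=100.0, img_std=40.0)  # x equals mean
w_off = fusion_weight(70.0, img_mean=100.0, img_std=40.0)    # farther from mean
```

Pixels near the preferred brightness (the shifted mean) receive weight close to 1, and the weight falls off smoothly on both sides, which is what lets each split image contribute where it is best exposed.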
It is understood that the calculation manner of the fusion weight in the embodiment of the present application includes, but is not limited to, using the above manner, and may also be other manners of calculating the fusion weight.
In the embodiment, the performance of the image brightness under different scenes can be flexibly controlled by selecting the optimal weight in each split image pixel, and the optimal fusion weight can ensure that the brightness and the dynamic state of the finally obtained target image are adjusted to a proper range, so that the brightness and the dynamic effect of the output target image are effectively improved.
In one embodiment, as shown in fig. 5, the split image includes a third image and a fourth image, and the brightness of the third image is higher than that of the fourth image; the step of calculating the fusion weight of at least two split images with different brightness comprises the following steps:
step 502, if the pixel value of the third image is smaller than the preset pixel critical value, calculating to obtain a first fusion weight according to the number of bits of the high-bit-width image, taking the first fusion weight as the fusion weight of the third image, and taking zero as the fusion weight of the fourth image.
Step 504, if the pixel value of the third image is greater than or equal to the preset pixel critical value, calculating to obtain a fusion weight of the fourth image according to the pixel value of the third image and a preset weight coefficient, and calculating to obtain a fusion weight of the third image according to the first fusion weight and the fusion weight of the fourth image.
Specifically, the third image may be a bright frame image, and the fourth image may be a dark frame image, that is, when the split image includes a bright frame image and a dark frame image, and the fusion weight of the two split images with different brightness is calculated, the terminal may determine whether the pixel value of the bright frame image is smaller than a preset pixel critical value; if the pixel value of the bright frame image is smaller than the preset pixel critical value, the terminal calculates to obtain a first fusion weight according to the digit of the high-bit-width image, the first fusion weight is used as the fusion weight of the bright frame image, and zero is used as the fusion weight of the dark frame image; if the pixel value of the bright frame image is larger than or equal to the preset pixel critical value, the terminal calculates the fusion weight of the dark frame image according to the pixel value of the bright frame image and the preset weight coefficient, and calculates the fusion weight of the bright frame image according to the first fusion weight and the fusion weight of the dark frame image.
For example, two 8BIT images with different brightness are taken as an example for explanation. That is, when only two frames of 8BIT images with different brightness exist, the specific calculation method is as follows:
when x < 192, weight1 = 0 and weight2 = 255;

when x ≥ 192, weight1 = (x - 192) × 4 and weight2 = 255 - weight1;

where x denotes the pixel value of the bright frame image, weight1 denotes the weight of the dark frame image, and weight2 denotes the weight of the bright frame image.
This calculation is designed so that, as the bright frame image rises from the preset pixel critical value of 192 toward overexposure, the weight of the dark frame image increases; the overexposed regions of the bright frame then use more dark-frame information, so that dynamic-range information is retained in those regions.
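The two-frame rule above translates directly into code; the threshold 192 and the slope 4 are the values given in the text:

```python
def two_frame_weights(x):
    """x: bright-frame pixel value.
    Returns (weight1 for the dark frame, weight2 for the bright frame)."""
    if x < 192:                      # below the preset critical value
        weight1, weight2 = 0, 255    # bright frame only
    else:                            # approaching overexposure
        weight1 = (x - 192) * 4      # dark-frame weight ramps up
        weight2 = 255 - weight1
    return weight1, weight2

w_mid = two_frame_weights(100)   # well-exposed pixel: bright frame dominates
w_hot = two_frame_weights(224)   # near-overexposed pixel: dark frame contributes
```

At x = 255 the dark frame carries almost all the weight (252 of 255), which is how the overexposed region recovers detail from the dark frame.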
In this embodiment, by performing laplacian fusion on split images with different brightness, since an optimal weight is selected for each pixel when calculating the fusion weight, the optimal weight can ensure that the brightness and the dynamic state of the finally output target image are both adjusted to a proper range.
In one embodiment, the method provided by the embodiment of the application can be applied to scenes for photographing based on a mobile phone camera. The following describes an image adjustment method provided in an embodiment of the present application, taking a scene photographed by a mobile terminal as an example.
In the conventional method, the image obtained by fusing the bright and dark frames of an HDR capture is directly subjected to Tonemapping (tone mapping), either global or local, to adjust the brightness and obtain a target image with appropriate brightness and dynamic range. A single Tonemapping module, however, has difficulty adjusting both the brightness and the dynamic range into a proper range, and has difficulty covering different scenes with a good effect. For example, a darker area in a shot scene may be brightened to a suitable degree while the same module over-brightens a lighter area of the same scene, or brightens another darker area insufficiently, so it is difficult to cover the adjustment effects required by different scenes.
Therefore, the embodiment of the application provides a method that combines simulated exposure with multi-frame Laplacian fusion to compress the dynamic range and adjust the brightness of an image. Fig. 6 shows a flowchart of the mobile terminal's processing of the HDR-fused image. First, the mobile terminal obtains the image after HDR (High Dynamic Range) fusion as the input image through an HDR bright/dark frame fusion module; next, a global brightening module brightens the fused image to a high-BIT (BIT-width) image; the fully brightened high-BIT image is then divided by different gain values to obtain low-BIT images with different brightness, simulating multi-step exposure images; finally, Laplacian fusion of the low-BIT images with different brightness yields an output image whose brightness and dynamic range are adjusted to a proper range. The HDR-fused image obtained through the HDR bright/dark frame fusion module may come from bright and dark frame images shot by the mobile terminal in specific scenes: the weights of the input bright frame image and dark frame image are automatically calculated, and the bright and dark frame images are fused with these weights to obtain the HDR-fused image.
In the embodiment of the present application, the global brightening module brightens the fused image to a high-BIT (BIT-width) image in order to avoid the error caused by quantization without resorting to floating-point calculation; processing the high-BIT image numerically ensures better precision. In addition, in some scenes, low-BIT images with different brightness can be obtained directly, which may bring some loss of precision; if that loss is acceptable, this method can also be adopted.
Fig. 7 is a processing flowchart of the image stretching and brightening module. The specific processing steps are as follows:
1. The HDR-fused image is usually an 8-BIT image; the mobile terminal may linearly pull the obtained fused image up to a high-BIT image by multiplying it by a gain value.
2. The mobile terminal may multiply the 8-BIT image by a gain value of 4, i.e., multiply the image pixel by pixel by 4, resulting in a 10-BIT image. The embodiment of the present application includes, but is not limited to, multiplying by different gain values to obtain images with different BIT widths for calculation.
3. The mobile terminal adjusts the brightness of the high-BIT image using nonlinear tone mapping. The mobile terminal may use Gamma correction for the tone mapping, that is, apply the Gamma correction formula to the pixel Value of the image to obtain the mapped pixel value OutValue; if gamma equals 1, no nonlinear tone mapping is performed. The formula is calculated as in formula (1). In this embodiment, the purpose of tone mapping with Gamma correction is to brighten some darker areas to a certain extent; alternatively, tone mapping may be performed without Gamma correction, or other nonlinear tone mapping manners may be adopted, for example, obtaining a curve such as the one shown in fig. 3 from multiple experiments and performing the nonlinear tone mapping through that curve.
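Formula (1) itself is not reproduced in this excerpt; the sketch below assumes a common Gamma-correction form that satisfies the stated property that gamma = 1 performs no mapping, with the 10-BIT ceiling 1023 used as the normalization constant:

```python
def gamma_map(value, gamma, max_val=1023.0):
    """Assumed Gamma form: OutValue = max_val * (Value / max_val) ** (1/gamma).
    gamma == 1 leaves the pixel unchanged; gamma > 1 lifts darker mid-tones."""
    return max_val * (value / max_val) ** (1.0 / gamma)

same = gamma_map(512.0, gamma=1.0)      # identity when gamma == 1
brighter = gamma_map(512.0, gamma=2.2)  # mid-tones lifted when gamma > 1
```

Whether the exponent is 1/gamma or gamma is a convention choice; the form here is picked so that gamma greater than 1 brightens darker areas, matching the stated purpose.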
Regarding the processing flow of splitting 8BIT images with different brightness to simulate multi-level exposure, the specific processing steps are as follows:
The mobile terminal first calculates the different gain values. For splitting into 3 images of 8 BITs, the gain values are designed to be 2 to the power of 0, 2 to the power of 1, and 2 to the power of 2. The mobile terminal then splits the high-BIT image according to the calculated gain values to obtain 3 8-BIT images with different brightness, i.e., simulates a 3-step multi-frame image with different exposures, where the specific calculation formula is as in formula (2).
Fig. 8 is a processing flowchart of performing Laplacian fusion on the 8-BIT images with different luminances. The specific processing steps are as follows:
1. The mobile terminal calculates fusion weights for the 8-BIT images with different brightness. The weights are calculated as in the aforementioned formula (4). Other ways of calculating the weights are possible; for example, when there are only two frames of 8-BIT images with different brightness, the following calculation is designed:
when x < 192, weight1 = 0 and weight2 = 255;

when x ≥ 192, weight1 = (x - 192) × 4 and weight2 = 255 - weight1;

where x denotes the pixel value of the bright frame image, weight1 denotes the weight of the dark frame image, and weight2 denotes the weight of the bright frame image.
2. The mobile terminal normalizes the weight of each pixel corresponding to the images with different brightness; the specific way of calculating the weights is as in formula (5).
3. The mobile terminal performs Laplacian pyramid fusion on the images with different brightness using the normalized weights to obtain the Laplacian-pyramid-fused result image.
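The three steps above can be sketched end to end. This is a minimal numpy illustration: plain decimation and pixel repetition stand in for the blurred reduce/expand of a production Laplacian pyramid, and the input images and weights are made up; it shows the structure (detail layers blended with Gaussian-pyramid weights, then collapsed), not the terminal's actual implementation:

```python
import numpy as np

def down(img):
    """Naive 2x reduce (decimation stands in for blur + subsample)."""
    return img[::2, ::2]

def up(img, shape):
    """Naive 2x expand by pixel repetition, cropped to the target shape."""
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return u[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels - 1):
        small = down(img)
        pyr.append(img - up(small, img.shape))  # detail (Laplacian) layer
        img = small
    pyr.append(img)                             # coarsest layer
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = down(img)
        pyr.append(img)
    return pyr

def laplacian_fuse(images, norm_weights, levels=3):
    """Blend each detail level with the matching weight level, then collapse."""
    laps = [laplacian_pyramid(i, levels) for i in images]
    gws = [gaussian_pyramid(w, levels) for w in norm_weights]
    blended = [sum(w[l] * p[l] for p, w in zip(laps, gws))
               for l in range(levels)]
    out = blended[-1]
    for layer in reversed(blended[:-1]):
        out = up(out, layer.shape) + layer      # rebuild from coarse to fine
    return out

a = np.full((8, 8), 100.0)                # toy dark split image
b = np.full((8, 8), 200.0)                # toy bright split image
wa = np.full((8, 8), 0.25)                # already-normalized weights
wb = np.full((8, 8), 0.75)                # (0.25 + 0.75 = 1 at every pixel)
fused = laplacian_fuse([a, b], [wa, wb])  # weighted blend of the two frames
```

For constant inputs the result is simply the weighted average; for real images, blending per level is what keeps seams between differently weighted regions from showing.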
In this embodiment, laplacian fusion is finally performed on low BIT images with different brightness, and since an optimal weight is selected for each pixel when the fusion weight is calculated, the optimal weight can ensure that the brightness and the dynamic state of the finally obtained target result image are both adjusted to a proper range, that is, the image obtained after HDR multi-frame fusion is adjusted to an image with the brightness and the dynamic state both adjusted to a proper range, so that the brightness and the dynamic effect of the image are effectively improved.
It should be understood that although the various steps in the flowcharts of fig. 1-8 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 1-8 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 9 is a block diagram of an image adjusting apparatus according to an embodiment. As shown in fig. 9, there is provided an image adjusting apparatus including: an obtaining module 902, a processing module 904, a splitting processing module 906, a computing module 908, and a fusing module 910, wherein:
an obtaining module 902, configured to obtain an image to be adjusted.
And the processing module 904 is configured to stretch the image to be adjusted to obtain a high-bit-width image.
And the splitting processing module 906 is configured to split the high-bit-width image according to a preset gain value to obtain split images with different brightness.
The calculating module 908 is configured to calculate a fusion weight of the split image according to the pixel value of the split image.
And a fusion module 910, configured to fuse the split images by using the fusion weight to obtain a target image.
In an embodiment, the calculating module is further configured to calculate weights of the input first image and second image, respectively, to obtain a first weight corresponding to the first image and a second weight corresponding to the second image, where the brightness of the first image is higher than that of the second image. The fusion module is further configured to fuse the first image and the second image based on the first weight and the second weight to obtain the image to be adjusted.

In one embodiment, the processing module is further configured to perform linear stretching on the image to be adjusted to obtain a high-bit-width image, and to perform nonlinear tone mapping on the pixel values of the high-bit-width image to obtain the brightness-adjusted high-bit-width image.
In an embodiment, the calculation module is further configured to correct the pixel value of the high-bit-width image by using a correction function, so as to obtain the pixel value of the tone-mapped high-bit-width image.
In one embodiment, the processing module is further configured to perform nonlinear tone mapping on the high-bit-width image according to a mapping relationship between a pre-configured pixel value before tone mapping and a post-tone-mapping pixel value, so as to obtain a tone-mapped high-bit-width image; the mapping relation is obtained by a preset curve chart or a mapping table.
In one embodiment, the obtaining module is further configured to obtain a target number of split images; gain values generated based on the target number are obtained. The processing module is further used for splitting the high-bit-width image according to the gain value to obtain split images of the split target number.
In one embodiment, the obtaining module is further configured to obtain at least two gain values, and the gain values are generated based on a preset gain function. The processing module is further configured to split the high-bit-width image according to the at least two gain values to obtain at least two split images with different brightness after the splitting.
In one embodiment, the apparatus further comprises: and selecting a module.
The splitting processing module is further used for carrying out brightness splitting processing on the high-bit-width image according to the gain value to obtain a first output pixel value corresponding to the split image after the brightness splitting processing. The calculation module is further configured to calculate a second output pixel value corresponding to the split image after the luminance split processing according to the bit number of the high-bit-width image. The selecting module is used for taking the smaller value of the first output pixel value and the second output pixel value as the pixel value of the split image after the brightness split processing.
In one embodiment, the calculation module is further configured to calculate a fusion weight of the split images of at least two frames with different brightness. The processing module is further used for carrying out normalization processing on the fusion weight to obtain the weight after the normalization processing. And the fusion module is also used for carrying out Laplace fusion on the split image according to the weight after the normalization processing to obtain a target image.
In one embodiment, the apparatus further comprises: and an input module.
The input module is used for inputting the pixel values of at least two split images with different brightness into a preset weight function and outputting to obtain the fusion weight corresponding to each split image.
In an embodiment, the calculation module is further configured to calculate a first fusion weight according to the number of bits of the high-bit-width image if the pixel value of the third image is smaller than a preset pixel critical value, use the first fusion weight as the fusion weight of the third image, and use zero as the fusion weight of the fourth image; and if the pixel value of the third image is greater than or equal to the preset pixel critical value, calculating to obtain the fusion weight of the fourth image according to the pixel value of the third image and the preset weight coefficient, and calculating to obtain the fusion weight of the third image according to the first fusion weight and the fusion weight of the fourth image.
The division of the modules in the image adjusting apparatus is merely for illustration, and in other embodiments, the image adjusting apparatus may be divided into different modules as needed to complete all or part of the functions of the image adjusting apparatus.
For specific limitations of the image adjusting apparatus, reference may be made to the above limitations of the image adjusting method, which are not described herein again. The respective modules in the image adjusting apparatus may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 10 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device, such as a mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant), POS (Point of Sales) terminal, vehicle-mounted computer, or wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units, and may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image adjustment method provided in the embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium.
Each module in the image adjusting apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. Program modules constituted by such a computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image adjustment method.
Embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform an image adjustment method.
Any reference to memory, storage, a database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable ROM), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), or flash memory. Volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static RAM), DRAM (Dynamic RAM), SDRAM (Synchronous Dynamic RAM), DDR SDRAM (Double Data Rate Synchronous Dynamic RAM), ESDRAM (Enhanced Synchronous Dynamic RAM), SLDRAM (Synchronous Link Dynamic RAM), RDRAM (Rambus Dynamic RAM), and DRDRAM (Direct Rambus Dynamic RAM).
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.
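The overall method described above (stretch the input to a high bit width, split it into differently exposed images by gain values with a bit-depth ceiling, weight, normalize, and fuse) can be sketched end to end as follows. The gain values, the Gaussian-shaped well-exposedness weight, and the plain weighted average standing in for the full Laplacian fusion are all illustrative assumptions, not the patented formulas.

```python
import numpy as np

def adjust_image(img8, bits=16, gains=(1.0, 4.0, 16.0)):
    """End-to-end sketch: stretch -> split by gains -> weight -> normalize -> fuse.

    `gains` and the weight shape are illustrative; the patent leaves the
    concrete gain function and weight function to preset configuration.
    """
    max_hi = float(2 ** bits - 1)
    # 1. Linearly stretch the 8-bit input to the high bit width.
    hi = np.asarray(img8, dtype=np.float64) / 255.0 * max_hi
    # 2. Split: each gain yields one brightness; clip at the bit-depth ceiling,
    #    i.e. take the smaller of the gained value and 2^bits - 1.
    splits = [np.minimum(g * hi, max_hi) for g in gains]
    # 3. Per-image weights (illustrative: favour well-exposed mid-tones).
    weights = [np.exp(-((s / max_hi - 0.5) ** 2) / 0.08) for s in splits]
    # 4. Normalize the weights per pixel and fuse.
    w_sum = sum(weights)
    fused = sum(w * s for w, s in zip(weights, splits)) / np.maximum(w_sum, 1e-12)
    return fused
```

Black stays black and saturated pixels stay at the bit-depth ceiling under this scheme, since every split agrees on those extremes and the normalized weights average identical values.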

Claims (14)

1. An image adjustment method, comprising:
acquiring an image to be adjusted;
stretching the image to be adjusted to obtain a high-bit-width image;
splitting the high-bit-width image according to a preset gain value to obtain split images with different brightness;
and calculating a fusion weight of the split images according to the pixel values of the split images, and fusing the split images by using the fusion weight to obtain a target image.
2. The method of claim 1, wherein the obtaining the image to be adjusted comprises:
respectively calculating the weight of an input first image and the weight of an input second image to obtain a first weight corresponding to the first image and a second weight corresponding to the second image; wherein the brightness of the first image is higher than the brightness of the second image;
and fusing the first image and the second image based on the first weight and the second weight to obtain the image to be adjusted.
3. The method according to claim 1, wherein the stretching the image to be adjusted to obtain a high-bit-width image comprises:
linearly stretching the image to be adjusted to obtain a high-bit-width image;
the method further comprises the following steps:
performing nonlinear tone mapping on the pixel values of the high-bit-width image to obtain a high-bit-width image with adjusted brightness.
4. The method according to claim 3, wherein the performing non-linear tone mapping on the pixel values of the high-bit-width image to obtain the adjusted-brightness high-bit-width image comprises:
and correcting the pixel value of the high-bit-width image by using a correction function to obtain the pixel value of the tone-mapped high-bit-width image.
5. The method according to claim 3, wherein the performing non-linear tone mapping on the pixel values of the high-bit-width image to obtain the adjusted-brightness high-bit-width image comprises:
performing nonlinear tone mapping on the high-bit-width image according to a preconfigured mapping relationship between pixel values before tone mapping and pixel values after tone mapping, to obtain a tone-mapped high-bit-width image; wherein the mapping relationship is obtained from a preset curve graph or a preset mapping table.
6. The method according to claim 1, wherein the splitting the high bit width image according to a preset gain value to obtain split images with different brightness includes:
acquiring the target number of the split images;
obtaining a gain value generated based on the target number;
and splitting the high-bit-width image according to the gain value to obtain split images of the target number after splitting.
7. The method of claim 6, further comprising:
acquiring at least two gain values; the gain value is generated based on a preset gain function;
and splitting the high-bit-width image according to at least two gain values to obtain at least two split images with different brightness after splitting.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
performing brightness splitting on the high-bit-width image according to the gain value to obtain a first output pixel value corresponding to the split image;
calculating a second output pixel value corresponding to the split image according to the bit number of the high-bit-width image;
and taking the smaller of the first output pixel value and the second output pixel value as the pixel value of the split image after the brightness splitting.
9. The method according to claim 1, wherein the calculating a fusion weight of the split image according to the pixel values of the split image, and fusing the split image by using the fusion weight to obtain a target image comprises:
calculating the fusion weight of the split images with different brightness of at least two frames;
normalizing the fusion weight to obtain a normalized weight;
and performing Laplace fusion on the split image according to the weight after the normalization processing to obtain a target image.
10. The method according to claim 9, wherein the calculating the fusion weight of the split image of at least two frames with different brightness comprises:
and inputting the pixel values of the split images with different brightness of at least two frames into a preset weight function, and outputting to obtain the fusion weight corresponding to each frame of the split images.
11. The method according to claim 9, wherein the split image comprises a third image and a fourth image, and the brightness of the third image is higher than the brightness of the fourth image;
the calculating the fusion weight of the split images with different brightness of at least two frames comprises:
if the pixel value of the third image is smaller than a preset pixel threshold, calculating a first fusion weight according to the bit number of the high-bit-width image, taking the first fusion weight as the fusion weight of the third image, and taking zero as the fusion weight of the fourth image;
if the pixel value of the third image is greater than or equal to the preset pixel threshold, calculating the fusion weight of the fourth image according to the pixel value of the third image and a preset weight coefficient, and calculating the fusion weight of the third image according to the first fusion weight and the fusion weight of the fourth image.
12. An image adjusting apparatus, comprising:
the acquisition module is used for acquiring an image to be adjusted;
the processing module is used for stretching the image to be adjusted to obtain a high-bit-width image;
the splitting processing module is used for splitting the high-bit-width image according to a preset gain value to obtain split images with different brightness;
the calculation module is used for calculating the fusion weight of the split image according to the pixel value of the split image;
and the fusion module is used for fusing the split images by using the fusion weight to obtain a target image.
13. An electronic device comprising a memory and a processor, the memory having a computer program stored thereon, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image adjustment method according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202111248981.8A 2021-10-26 2021-10-26 Image adjusting method and device, electronic equipment and computer readable storage medium Pending CN113920031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111248981.8A CN113920031A (en) 2021-10-26 2021-10-26 Image adjusting method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113920031A true CN113920031A (en) 2022-01-11

Family

ID=79242915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111248981.8A Pending CN113920031A (en) 2021-10-26 2021-10-26 Image adjusting method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113920031A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107534735A (en) * 2016-03-09 2018-01-02 华为技术有限公司 Image processing method, device and the terminal of terminal
CN109727215A (en) * 2018-12-28 2019-05-07 Oppo广东移动通信有限公司 Image processing method, device, terminal device and storage medium
GB201919384D0 (en) * 2019-01-30 2020-02-05 Canon Kk Image processing apparatus, image processing method, program, and storage medium
CN111105359A (en) * 2019-07-22 2020-05-05 浙江万里学院 Tone mapping method for high dynamic range image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, Ailing: "Research on Industrial X-ray Image Enhancement Algorithms", China Master's Theses Full-text Database, no. 09, 15 September 2021 (2021-09-15), pages 138-544 *

Similar Documents

Publication Publication Date Title
US10074165B2 (en) Image composition device, image composition method, and recording medium
CN110009587B (en) Image processing method, image processing device, storage medium and electronic equipment
CN106981054B (en) Image processing method and electronic equipment
JP6849696B2 (en) A low-cost color extension module that extends the color of an image
US20200193578A1 (en) Method and system for image enhancement
CN109413335B (en) Method and device for synthesizing HDR image by double exposure
CN109817170B (en) Pixel compensation method and device and terminal equipment
CN109785239B (en) Image processing method and device
WO2012015020A1 (en) Method and device for image enhancement
US8666151B2 (en) Apparatus and method for enhancing visibility of color image
CN116113976A (en) Image processing method and device, computer readable medium and electronic equipment
CN110473156B (en) Image information processing method and device, storage medium and electronic equipment
CN113781358B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN115147304A (en) Image fusion method and device, electronic equipment, storage medium and product
CN113920031A (en) Image adjusting method and device, electronic equipment and computer readable storage medium
CN108401119B (en) Image processing method, mobile terminal and related medium product
CN107454340B (en) Image synthesis method and device based on high dynamic range principle and mobile terminal
CN115205168A (en) Image processing method, device, electronic equipment, storage medium and product
CN111583163B (en) AR-based face image processing method, device, equipment and storage medium
CN115375780A (en) Color difference calculation method and device, electronic equipment, storage medium and product
CN114422721A (en) Imaging method, imaging device, electronic equipment and storage medium
CN116167926A (en) Model training method and contrast adjustment method
CN114066783A (en) Tone mapping method, tone mapping device, electronic equipment and storage medium
CN109146815B (en) Image contrast adjusting method and device and computer equipment
CN112887597A (en) Image processing method and device, computer readable medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination