
CN113676675A - Image generation method and device, electronic equipment and computer-readable storage medium - Google Patents

Image generation method and device, electronic equipment and computer-readable storage medium

Info

Publication number
CN113676675A
CN113676675A
Authority
CN
China
Prior art keywords
color
pixel
image
panchromatic
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110937243.8A
Other languages
Chinese (zh)
Other versions
CN113676675B (en)
Inventor
杨鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110937243.8A
Publication of CN113676675A
Application granted
Publication of CN113676675B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 — Circuitry for compensating brightness variation in the scene
    • H04N23/76 — Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 — Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 — Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46 — Extracting pixel data from image sensors by combining or binning pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application relates to an image generation method, an image generation device, a computer device and a storage medium. The method comprises the following steps: exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel points and panchromatic photosensitive data corresponding to the panchromatic pixel points; responding to a first-stage merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other; a first target image is generated based on each of the first single-color pixels and each of the first panchromatic pixels. By adopting the method, the image can be generated more accurately.

Description

Image generation method and device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In more and more electronic devices, a camera is installed to realize a photographing function. The camera is provided with an image sensor, and images are collected and generated through the image sensor.
However, conventional image generation methods generally use an RGB image sensor to acquire an RGB (Red, Green, Blue) image and display it on a screen, and the generated image may be inaccurate.
Disclosure of Invention
The embodiment of the application provides an image generation method and device, electronic equipment and a computer readable storage medium, which can generate an image more accurately.
An image generation method applied to an electronic device including an image sensor, the image sensor including a pixel point array, the pixel point array including minimum pixel point repeating units, each of the minimum pixel point repeating units including a plurality of pixel point sub-units, each of the pixel point sub-units including a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points in the pixel point sub-units being alternately arranged in both a row direction and a column direction; the method comprises the following steps:
exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel points and panchromatic photosensitive data corresponding to the panchromatic pixel points;
responding to a first-stage merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
a first target image is generated based on each of the first single-color pixels and each of the first panchromatic pixels.
An image generating apparatus applied to an electronic device including an image sensor, the image sensor including a pixel dot array including minimum pixel dot repeating units, each of the minimum pixel dot repeating units including a plurality of pixel dot sub-units, each of the pixel dot sub-units including a plurality of single-color pixel dots and a plurality of panchromatic pixel dots, the single-color pixel dots and the panchromatic pixel dots in the pixel dot sub-units being alternately arranged in both a row direction and a column direction; the device comprises:
the exposure module is used for exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point;
the merging module is used for responding to a first-stage merging instruction, merging the two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
an image generation module to generate a first target image based on each of the first single-color pixels and each of the first panchromatic pixels.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image generation method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
In the image generation method, the image generation apparatus, the electronic device, and the computer-readable storage medium, the electronic device includes an image sensor, the image sensor includes a pixel point array, the pixel point array includes minimum pixel point repeating units, each minimum pixel point repeating unit includes a plurality of pixel point sub-units, and each pixel point sub-unit includes a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points being alternately arranged in both the row direction and the column direction, so that a higher amount of incoming light can be received through the panchromatic pixel points. The electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and panchromatic photosensitive data, and then merges the single-color photosensitive data and the panchromatic photosensitive data respectively to obtain first single-color pixels and first panchromatic pixels. Because the first panchromatic pixels carry a higher amount of incoming light, more light can be captured based on the first single-color pixels and the first panchromatic pixels, which improves the overall brightness and the signal-to-noise ratio of the image, so that the first target image is generated more accurately.
In the pixel merging process, the electronic device merges two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merges two panchromatic photosensitive data obtained in a second diagonal direction to obtain a first panchromatic pixel. Because each output pixel is obtained by merging only two photosensitive data, the target image can be output more quickly, balancing the image information retained in the target image against the output speed.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an electronic device in one embodiment;
FIG. 2 is an exploded view of an image sensor in one embodiment;
FIG. 3 is a schematic diagram of the arrangement of the minimum color filter repeating units in one embodiment;
FIG. 4 is a flow diagram of a method of image generation in one embodiment;
FIG. 5 is a diagram illustrating a one-level merge approach in one embodiment;
FIG. 6 is a diagram illustrating a two-stage merge approach in one embodiment;
FIG. 7 is a schematic diagram of a full output mode in one embodiment;
FIG. 8 is a schematic illustration of image generation in another embodiment;
FIG. 9 is a schematic illustration of image generation in another embodiment;
FIG. 10 is a schematic illustration of image generation in another embodiment;
FIG. 11 is a block diagram showing the configuration of an image generating apparatus according to an embodiment;
fig. 12 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
As shown in fig. 1, the electronic device includes a camera 102, and the camera 102 includes an image sensor including a microlens array, a color filter array, and a pixel dot array.
The electronic device is described below by taking a mobile phone as an example, but the electronic device is not limited to a mobile phone. The electronic device includes a camera, a processor, and a housing. The camera and the processor are arranged in the housing, and the housing can also be used to install functional modules such as a power supply device and a communication device, so that the housing provides dust-proof, drop-proof, and water-proof protection for these functional modules.
The camera may be a front camera, a rear camera, a side camera, an under-screen camera, etc., without limitation. The camera includes a lens and an image sensor; when the camera shoots an image, light passes through the lens and reaches the image sensor, and the image sensor converts the light signal irradiated on it into an electrical signal.
As shown in fig. 2, the image sensor includes a microlens array 21, a color filter array 22, and a pixel point array 23.
The microlens array 21 includes a plurality of microlenses 211. The microlenses 211, the color filters in the color filter array 22, and the pixel points in the pixel point array 23 are arranged in one-to-one correspondence. Each microlens 211 converges incident light; the converged light passes through the corresponding color filter and is then projected onto the corresponding pixel point, which converts the received light into an electrical signal.
The color filter array 22 includes a plurality of minimum color filter repeating units 221. Each minimum color filter repeating unit 221 includes a plurality of color filter sub-units 222. In this embodiment, the minimum color filter repeating unit 221 includes 4 color filter sub-units 222 arranged in a matrix. Each color filter sub-unit 222 includes a plurality of single-color filters 223 and a plurality of panchromatic filters 224, and different color filter sub-units 222 contain different single-color filters 223. For example, the single-color filter included in filter sub-unit A is a red filter, and the single-color filter included in filter sub-unit B is a green filter. Each color filter in the color filter array 22 corresponds one-to-one with a pixel point in the pixel point array 23, and the light filtered by a color filter is projected onto the corresponding pixel point to obtain photosensitive data.
A single-color filter is a filter that transmits light of a single color, and may specifically be a red (R) filter, a green (G) filter, a blue (B) filter, or the like. A red filter transmits red light, a green filter transmits green light, and a blue filter transmits blue light.
A panchromatic filter is a filter that transmits light of all colors, and may specifically be a white (W) filter. It will be appreciated that a panchromatic filter transmits white light, which is the combination of light across all visible wavelength bands; that is, a panchromatic filter can transmit light of all colors.
Similarly, the pixel array 23 includes minimum pixel repeating units 231, each minimum pixel repeating unit 231 includes a plurality of pixel sub-units 232, each pixel sub-unit 232 includes a plurality of single-color pixels 233 and a plurality of panchromatic pixels 234, and the single-color pixels 233 and the panchromatic pixels 234 in the pixel sub-units 232 are alternately arranged in the row direction and the column direction. Each pixel point sub-unit 232 corresponds to one color filter sub-unit 222, and the pixel points in each pixel point sub-unit 232 also correspond to the color filters in the corresponding color filter sub-units 222 one to one. The light transmitted by the panchromatic filter 224 is projected to the panchromatic pixel points 234, and panchromatic sensitization data can be obtained; the light transmitted through the single color filter 223 is projected to the single color pixel 233, and single color sensitive data can be obtained. In the present embodiment, the minimum pixel point repeating unit 231 includes 4 pixel point sub-units 232, and the 4 pixel point sub-units 232 are arranged in a matrix.
A single-color pixel is a pixel that receives light of a single color, and may specifically be a red (R) pixel, a green (G) pixel, or a blue (B) pixel. A panchromatic pixel is a pixel that receives light of all colors, and may specifically be a white (W) pixel.
In one embodiment, the minimum filter repeat unit is 8 rows and 8 columns of 64 filters arranged in the following manner:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
where w1 denotes a panchromatic filter, and a1, b1, and c1 each denote a single-color filter. The panchromatic filters in each minimum filter repeating unit of the color filter array are uniformly distributed, so that the amount of incoming light at each position is increased uniformly, improving the signal-to-noise ratio of the generated image as a whole. Moreover, a color filter array composed of minimum filter repeating units arranged in this way makes the image-processing algorithm more stable and yields a better processed image.
Fig. 3 is a schematic diagram of the arrangement of the minimum color filter repeating units in one embodiment, where w1 represents a panchromatic filter and accounts for 50% of the minimum filter repeating unit; a1, b1, and c1 each represent a single-color filter, with a1 and c1 each accounting for 12.5% and b1 accounting for 25% of the minimum filter repeating unit.
In one embodiment, w1 may represent a white color filter, and a1, b1, and c1 may represent a red color filter, a green color filter, and a blue color filter, respectively. In other embodiments, w1 may represent a white color filter, and a1, b1, and c1 may represent a cyan color filter, a magenta color filter, and a yellow color filter, respectively.
For example, w1 may represent a white color filter, a1 a red color filter, b1 a green color filter, and c1 a blue color filter. For another example, w1 may represent a white color filter, a1 a green color filter, b1 a red color filter, and c1 a blue color filter.
It should be noted that the order of the panchromatic filter and the single-color filter can be set according to the requirement, for example, the panchromatic filter can be placed before the single-color filter, or can be placed after the single-color filter. The order between the individual color filters can also be adjusted as desired. The order of the various pixels obtained by the single color filter may be set as needed, and is not limited.
In the present embodiment, the minimum color filter repeating unit includes a 50% full-color filter, and the light entering amount of the image sensor can be increased as much as possible, so that more photosensitive data can be acquired.
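The 8-row-by-8-column pattern and the stated proportions can be checked with a short sketch (illustrative only, not part of the patent; the single-letter names `w`, `a`, `b`, `c` stand in for w1, a1, b1, c1):

```python
# Build the 8x8 minimum filter repeating unit described above and
# verify the stated proportions: w = 50%, a = 12.5%, b = 25%, c = 12.5%.
PATTERN = [
    "a w a w b w b w",
    "w a w a w b w b",
    "a w a w b w b w",
    "w a w a w b w b",
    "b w b w c w c w",
    "w b w b w c w c",
    "b w b w c w c w",
    "w b w b w c w c",
]

# Flatten the 8 rows into a list of 64 filter labels.
cells = [f for row in PATTERN for f in row.split()]
total = len(cells)  # 64 filters in the repeating unit

counts = {f: cells.count(f) for f in "wabc"}
shares = {f: counts[f] / total for f in counts}

print(shares)  # {'w': 0.5, 'a': 0.125, 'b': 0.25, 'c': 0.125}
```

The counts confirm that the panchromatic filters occupy exactly half of the unit, matching the 50% figure given for FIG. 3.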
FIG. 4 is a flow diagram of a method for image generation in one embodiment. The image generation method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 4, the image generating method includes steps 402 to 406.
Step 402, exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point.
Photosensitive data is data obtained by converting an optical signal into an electrical signal after a pixel point receives light. Single-color photosensitive data is obtained when a single-color pixel point receives single-color light and converts its signal into an electrical signal; for example, it may include the brightness value, exposure duration, or gray value of the single-color light. Panchromatic photosensitive data is obtained when a panchromatic pixel point receives light of all colors and converts those signals into electrical signals; for example, it may include the brightness values, exposure durations, or gray values of all the color light.
Specifically, the electronic device obtains a preset exposure parameter, and exposes each pixel point in the pixel point array according to the preset exposure parameter to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point. The preset exposure parameters at least comprise aperture, shutter speed and sensitivity.
Step 404, in response to a first-stage merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other.
The first diagonal direction is a direction for merging single-color sensitive data. The second diagonal direction is a direction for merging full-color photoreception data. In one embodiment, the first diagonal direction is the direction indicated by the line connecting the upper left corner and the lower right corner, and the second diagonal direction is the direction indicated by the line connecting the upper right corner and the lower left corner. In another embodiment, the first diagonal direction is the direction indicated by the line connecting the upper right corner and the lower left corner, and the second diagonal direction is the direction indicated by the line connecting the upper left corner and the lower right corner.
The first single-color pixel is a pixel obtained by merging two single-color photosensitive data in the first diagonal direction. The first panchromatic pixel is a pixel obtained by merging two panchromatic photosensitive data in the second diagonal direction.
The same single-color pixel points in the pixel point array are arranged in a first diagonal direction, and the panchromatic pixel points are arranged in a second diagonal direction.
The electronic equipment responds to a primary combination instruction, sequentially obtains two same single-color photosensitive data in a first diagonal direction according to the arrangement sequence of same single-color pixel points in the pixel point array, and combines the two same single-color photosensitive data to obtain a first single-color pixel; and sequentially acquiring two panchromatic photosensitive data in a second diagonal direction according to the arrangement sequence of the panchromatic pixel points in the pixel point array, and merging the two panchromatic photosensitive data to obtain the first panchromatic pixel. The first-stage merging instruction is an instruction corresponding to merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array and merging two full-color photosensitive data obtained in a second diagonal direction in the pixel point array.
A first target image is generated based on each first single-color pixel and each first panchromatic pixel, step 406.
The first target image is an image generated based on the first single-color pixels and the first panchromatic pixels.
In one embodiment, the electronic device combines each first single-color pixel and each first panchromatic pixel to generate the first target image.
In another embodiment, the electronic device performs a filtering process on each of the first single-color pixels and each of the first panchromatic pixels, and combines the filtered first single-color pixels and the filtered first panchromatic pixels to generate the first target image. The electronic device performs filtering processing on each first single-color pixel and each first panchromatic pixel, so that noise in the first single-color pixel and the first panchromatic pixel can be filtered out, and the first target image can be generated more accurately.
In another embodiment, the electronic device can also process a designated first single-color pixel or a designated first panchromatic pixel to generate the first target image. The processing may include filtering, deletion, interpolation, pixel value adjustment, or the like.
In other embodiments, the electronic device may also generate the first target image in other manners, which is not limited herein.
According to the image generation method, the electronic device includes an image sensor, the image sensor includes a pixel point array, the pixel point array includes minimum pixel point repeating units, each minimum pixel point repeating unit includes a plurality of pixel point sub-units, and each pixel point sub-unit includes a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points being alternately arranged in both the row direction and the column direction, so that a higher amount of incoming light can be received through the panchromatic pixel points. The electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and panchromatic photosensitive data, and then merges the single-color photosensitive data and the panchromatic photosensitive data respectively to obtain first single-color pixels and first panchromatic pixels. Because the first panchromatic pixels carry a higher amount of incoming light, more light can be captured based on the first single-color pixels and the first panchromatic pixels, which improves the overall brightness and the signal-to-noise ratio of the image, so that the first target image is generated more accurately.
In the pixel merging process, the electronic device merges two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merges two panchromatic photosensitive data obtained in a second diagonal direction to obtain a first panchromatic pixel. Because each output pixel is obtained by merging only two photosensitive data, the target image can be output more quickly, balancing the image information retained in the target image against the output speed.
FIG. 5 is a diagram illustrating the first-level merging mode in one embodiment. In FIG. 5, 502 is the first diagonal direction in the pixel point array 23 and 504 is the second diagonal direction, the two directions being perpendicular to each other. The electronic device merges the two single-color photosensitive data obtained in the first diagonal direction 502 to obtain a first single-color pixel, and merges the two panchromatic photosensitive data obtained in the second diagonal direction 504 to obtain a first panchromatic pixel, thereby generating the first target image 506 based on the first single-color pixels and the first panchromatic pixels.
In one embodiment, combining two single-color sensitive data obtained from a first diagonal direction in a pixel dot array to obtain a first single-color pixel, and combining two panchromatic sensitive data obtained from a second diagonal direction in the pixel dot array to obtain a first panchromatic pixel includes: adding or averaging two single-color photosensitive data obtained in a first diagonal direction in a pixel point array to obtain a first single-color pixel; and adding or averaging the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel.
For example, the electronic device acquires two single-color photosensitive data, A1 and A2, in a first diagonal direction in the pixel point array, and adds A1 and A2 to obtain a first single-color pixel of A1 + A2.
For another example, the electronic device acquires two panchromatic photosensitive data, A3 and A4, in a second diagonal direction in the pixel point array, and averages A3 and A4 to obtain a first panchromatic pixel of (A3 + A4)/2.
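The diagonal merging described above can be sketched on a single 2×2 neighborhood. This is an illustrative layout, not the patent's exact addressing: the two same-color samples are assumed to lie on the main diagonal and the two panchromatic (W) samples on the anti-diagonal, and `bin_diagonal` is a hypothetical helper name.

```python
import numpy as np

def bin_diagonal(quad, mode="sum"):
    """First-level merge of one 2x2 neighborhood (sketch).

    Two same-color samples sit on the main diagonal (upper-left,
    lower-right); two panchromatic samples sit on the anti-diagonal
    (upper-right, lower-left). Each pair is merged by summing or
    averaging, matching the A1 + A2 and (A3 + A4)/2 examples above.
    """
    color = np.array([quad[0, 0], quad[1, 1]], dtype=np.float64)
    white = np.array([quad[0, 1], quad[1, 0]], dtype=np.float64)
    if mode == "sum":
        return float(color.sum()), float(white.sum())
    return float(color.mean()), float(white.mean())

# One 2x2 neighborhood: color samples 10 and 14, panchromatic 200 and 180.
quad = np.array([[10, 200],
                 [180, 14]])
print(bin_diagonal(quad, "sum"))   # (24.0, 380.0)
print(bin_diagonal(quad, "mean"))  # (12.0, 190.0)
```

Note that summing roughly doubles the signal level (useful in low light), while averaging keeps the merged pixel in the original value range.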
In another embodiment, the electronic device may further perform weighted average on two single-color sensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel; and carrying out weighted average on the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel.
For example, the electronic device acquires two single-color photosensitive data, A5 and A6, in a first diagonal direction in the pixel point array, where the weighting factor of A5 is a and the weighting factor of A6 is b; the weighted average of A5 and A6 gives a first single-color pixel of (A5·a + A6·b)/(a + b).
In another embodiment, the electronic device may further obtain two single-color sensitive data obtained in a first diagonal direction in the pixel point array, and select a target single-color sensitive data from the two single-color sensitive data as a first single-color pixel; and acquiring two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array, and selecting target panchromatic photosensitive data from the two panchromatic photosensitive data to serve as a first panchromatic pixel. Optionally, the obtaining mode of the target single-color photosensitive data at least comprises random selection, manual selection, selection of a larger value or selection of a smaller value and the like; the acquisition mode of the target panchromatic photosensitive data at least comprises random selection, manual selection, selection of a large value or selection of a small value and the like.
In one embodiment, after generating the first target image, the method further includes: determining pixels required by the current position from a Bayer array image to be generated in sequence; generating a first panchromatic channel image and a plurality of first single-color channel images according to a first target image, determining a target single-color channel image from each first single-color channel image based on pixels required by a current position, extracting the first single-color pixels from corresponding positions of the target single-color channel image as pixels of the current position in a Bayer array image to be generated until pixels of all positions in the Bayer array image to be generated are generated, and obtaining the Bayer array image; the pixels in the first panchromatic channel image are all first panchromatic pixels, and the pixels in each first single-color channel image are all same type of first single-color pixels.
The pixels in the first panchromatic channel map are all first panchromatic pixels, and the pixels in each first single-color channel map are all same kind of first single-color pixels. For example, the first panchromatic channel map is a W (White) channel map, and the pixels in the W channel map are all White pixels. The first single color channel map is an R (Red) channel map, and the pixels in the R channel map are all Red pixels.
In one embodiment, the electronic device can extract the same type of pixels from the first target image, respectively, to generate a first panchromatic channel map and a plurality of first single-color channel maps. In another embodiment, the electronic device can split the first target image to obtain a first panchromatic channel map and a plurality of first single-color channel maps. In another embodiment, the electronic device can also interpolate a first panchromatic channel map and a first single-color channel map for the same pixel based on the pixels in the first target image. In other embodiments, the electronic device may generate the first panchromatic channel map and the plurality of first single-color channel maps in other manners, which is not limited herein.
For example, the first target image is an RGBW image, the electronic device may generate an R-channel map, a G-channel map, a B-channel map, and a W-channel map based on the first target image. The R-channel image comprises R pixels, the G-channel image comprises G pixels, the B-channel image comprises B pixels, and the W-channel image comprises W pixels.
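Splitting a mosaic image into per-channel maps, as in the RGBW example above, can be sketched as follows (a minimal illustration; the mosaic and pattern are represented as nested lists, and positions a channel does not cover are left as None, to be filled later by interpolation):

```python
def split_channels(image, pattern):
    """Split a mosaic image into per-channel maps.

    `image` is a 2-D list of pixel values; `pattern` is a same-shaped 2-D
    list of channel labels such as 'R', 'G', 'B', 'W'.
    """
    rows, cols = len(image), len(image[0])
    channels = {}
    for r in range(rows):
        for c in range(cols):
            ch = pattern[r][c]
            plane = channels.setdefault(ch, [[None] * cols for _ in range(rows)])
            plane[r][c] = image[r][c]   # copy the sample into its channel map
    return channels
```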
The Bayer array simulates the human eye's sensitivity to color: using an arrangement of 1 red, 2 green and 1 blue filter, it converts gray-scale information into color information, and is one of the main technologies enabling a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor to capture color images.
The target single color channel map is a single color channel map that is consistent with the channel map to which the desired pixel for the current position belongs. For example, if the pixel required for the current position (2,5) in the bayer array image to be generated is a G pixel, the target single-color channel map is a G channel map, and the G pixel is extracted from (2,5) of the G channel map as the pixel of the current position (2,5) in the bayer array image to be generated.
As another example, if the pixel required for the current position (100,212) in the bayer array image to be generated is an R pixel, the target single-color channel map is an R channel map, and the R pixel is extracted from (100,212) of the R channel map as the pixel of the current position (100,212) in the bayer array image to be generated.
In this embodiment, the pixel required at the current position is sequentially determined from the Bayer array image to be generated; a target single-color channel map, which is one of the first single-color channel maps, is determined based on the pixel required at the current position; and the first single-color pixel is extracted from the corresponding position of the target single-color channel map as the pixel at the current position in the Bayer array image to be generated, until the pixels at all positions in the Bayer array image to be generated have been generated. In this way, the Bayer array image can be generated accurately, and can subsequently be input to a processor designed for Bayer array processing, making an image sensor containing panchromatic pixel points compatible with such a processor.
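The per-position extraction procedure can be sketched as follows (a minimal illustration assuming fully populated channel maps and an RGGB layout, which is one common Bayer convention; the patent itself does not fix the layout):

```python
def assemble_bayer(channels, rows, cols):
    """Build a Bayer mosaic by copying, for each position, the pixel from
    the single-color channel map that the position requires.

    `channels` maps 'R'/'G'/'B' to 2-D lists of already-interpolated pixels.
    """
    bayer_pattern = [['R', 'G'],
                     ['G', 'B']]            # RGGB: 1 red, 2 green, 1 blue
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            needed = bayer_pattern[r % 2][c % 2]  # target channel map here
            out[r][c] = channels[needed][r][c]
    return out
```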
In another embodiment, after the generating the first target image, the method further includes: generating a first panchromatic channel image and a plurality of first single-color channel images according to the first target image, and combining the first single-color channel images to generate a Bayer array image; the pixels in the first panchromatic channel image are all first panchromatic pixels, and the pixels in each first single-color channel image are all same type of first single-color pixels.
For example, if the first target image is an RGBW image, the electronic device may generate an R-channel image, a G-channel image, a B-channel image, and a W-channel image based on the first target image, and then combine the R-channel image, the G-channel image, and the B-channel image to generate a bayer array image, that is, an RGB image.
In one embodiment, photosensitive data obtained by exposure of each minimum pixel point repeating unit are combined to obtain a first pixel area in a first target image; the first pixel area is 32 pixels with 4 rows and 8 columns, and the arrangement mode is as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
where w2 denotes the first panchromatic pixel and a2, b2 and c2 each denote the first single-color pixel.
In one embodiment, after obtaining the single-color photosensitive data corresponding to the single-color pixel point and the panchromatic photosensitive data corresponding to the panchromatic pixel point, the method further includes: in response to the second-level merging instruction, for each pixel point subunit, merging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel, and merging the panchromatic photosensitive data obtained by the pixel point subunit to obtain a second panchromatic pixel; a second target image is generated based on the second single-color pixels and the second panchromatic pixels.
The second-level merging instruction is an instruction corresponding to merging the single-color photosensitive data obtained by the pixel point sub-unit and merging the full-color photosensitive data obtained by the pixel point sub-unit.
The second single-color pixel is obtained by combining the single-color photosensitive data obtained by the pixel point sub-unit. The second panchromatic pixel is a pixel obtained by combining the panchromatic sensitization data obtained by the pixel point sub-unit. The same pixel point subunit includes a plurality of single-color pixel points and a plurality of panchromatic pixel points, and thus the same pixel point subunit can obtain a second single-color pixel and a second panchromatic pixel.
The second target image is an image generated based on the second single-color pixels and the second panchromatic pixels.
In one embodiment, the electronics combine each second single-color pixel and each second panchromatic pixel to generate a second target image.
In another embodiment, the electronic device performs filter processing on each of the second single-color pixels and each of the second panchromatic pixels, and combines the filter-processed each of the second single-color pixels and each of the second panchromatic pixels to generate the second target image. The electronic device performs filtering processing on each second single-color pixel and each second panchromatic pixel, so that noise in each second single-color pixel and each second panchromatic pixel can be filtered out, and the second target image can be generated more accurately.
In another embodiment, the electronic device can also process a designated second single-color pixel or a designated second panchromatic pixel to generate the second target image. The processing may include filtering, deletion, interpolation, pixel value adjustment, or the like.
In other embodiments, the electronic device may also generate the second target image in other manners, which is not limited herein.
In this embodiment, the electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and panchromatic photosensitive data, and then merges them respectively to obtain a second single-color pixel and a second panchromatic pixel. Because the second panchromatic pixel has a higher light intake, a higher light intake can be obtained based on the second single-color pixels and the second panchromatic pixels, improving the overall brightness of the image, thereby improving the signal-to-noise ratio of the image and generating the second target image more accurately.
In the pixel merging process, the single-color photosensitive data obtained by each pixel point sub-unit are merged to obtain a second single-color pixel, and the panchromatic photosensitive data obtained by each pixel point sub-unit are merged to obtain a second panchromatic pixel. Because each second single-color pixel or second panchromatic pixel is obtained by merging data within a single pixel point sub-unit, the target image can be output more quickly, improving the speed of image output.
In one embodiment, the image sensor further includes a color filter array. The color filter array includes minimum color filter repeating units, each minimum color filter repeating unit includes a plurality of color filter subunits, and each color filter subunit includes a plurality of single-color filters and a plurality of panchromatic filters. The color filters in the color filter array correspond one-to-one with the pixel points in the pixel point array, and light filtered by each color filter is projected onto the corresponding pixel point to obtain photosensitive data. The minimum color filter repeating unit is 64 color filters in 8 rows and 8 columns, arranged as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 denotes a full-color filter, and a1, b1 and c1 each denote a single-color filter;
The photosensitive data obtained by exposure of each minimum pixel point repeating unit are merged to obtain a second pixel area in the second target image; the second pixel area is 8 pixels in 2 rows and 4 columns, arranged as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel and a3, b3 and c3 each denote a second single-color pixel.
FIG. 6 is a diagram illustrating the second-level merging manner in one embodiment. The electronic device responds to a second-level merging instruction, where 602 is one of the pixel point sub-units. For the pixel point sub-unit 602, the 8 single-color photosensitive data obtained by the pixel point sub-unit 602 are merged to obtain a second single-color pixel, and the 8 panchromatic photosensitive data obtained by the pixel point sub-unit 602 are merged to obtain a second panchromatic pixel. The other pixel point sub-units are merged in the same manner, and a second target image 604 is generated based on the second single-color pixels and the second panchromatic pixels.
In one embodiment, the merging of the single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel, and the merging of the full-color photosensitive data obtained by the pixel point sub-unit to obtain a second full-color pixel includes: adding or averaging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel; and adding or averaging all the panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel.
For example, the electronic device obtains the 8 single-color photosensitive data from a pixel point subunit in the pixel point array, and adds them to obtain a second single-color pixel.
For another example, the electronic device obtains the 8 panchromatic photosensitive data from a pixel point subunit in the pixel point array, and adds them to obtain a second panchromatic pixel.
In another embodiment, the electronic device may further perform weighted average on each single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel; and carrying out weighted average on all panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel.
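The second-level merge of one pixel point sub-unit, by summation or averaging, can be sketched as follows (illustrative only; the function and mode names are assumptions):

```python
def bin_subunit(single_color_data, panchromatic_data, mode="average"):
    """Second-level merge for one pixel-point sub-unit.

    Collapses all of its single-color samples into one second single-color
    pixel and all of its panchromatic samples into one second panchromatic
    pixel, by summation or averaging.
    """
    def combine(values):
        total = sum(values)
        return total / len(values) if mode == "average" else total

    return combine(single_color_data), combine(panchromatic_data)
```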
In one embodiment, after generating the second target image, the method further includes: generating a second panchromatic channel map and a plurality of second single-color channel maps from the second target image, and sequentially determining the pixel required at the current position of a Bayer array image to be generated; determining, based on the pixel required at the current position, a target single-color channel map from among the second single-color channel maps; and extracting the second single-color pixel at the corresponding position of the target single-color channel map as the pixel at the current position in the Bayer array image to be generated, until the pixels at all positions in the Bayer array image to be generated have been generated, thereby obtaining the Bayer array image. The pixels in the second panchromatic channel map are all second panchromatic pixels, and the pixels in each second single-color channel map are all second single-color pixels of the same kind.
The pixels in the second panchromatic channel map are all second panchromatic pixels, and the pixels in each second single-color channel map are all same kind of second single-color pixels. For example, the second panchromatic channel map is a W (White) channel map, and the pixels in the W channel map are all White pixels. The second single color channel map is an R (Red) channel map, and the pixels in the R channel map are all Red pixels.
In one embodiment, the electronic device can extract the same type of pixels from the second target image, respectively, to generate a second panchromatic channel map and a plurality of second single-color channel maps. In another embodiment, the electronic device can split the second target image to obtain a second panchromatic channel map and a plurality of second single-color channel maps. In another embodiment, the electronic device can also interpolate a second panchromatic channel map and a second single-color channel map corresponding to the same pixel based on the individual pixels in the second target image. In other embodiments, the electronic device may generate the second panchromatic channel map and the plurality of second single-color channel maps in other manners, which is not limited herein.
For example, the second target image is an RGBW image, and the electronic device may generate an R-channel map, a G-channel map, a B-channel map, and a W-channel map based on the second target image. The R-channel image comprises R pixels, the G-channel image comprises G pixels, the B-channel image comprises B pixels, and the W-channel image comprises W pixels.
The target single color channel map is a single color channel map that is consistent with the channel map to which the desired pixel for the current position belongs.
For example, if the pixel required for the current position (3,5) in the bayer array image to be generated is a G pixel, the target single-color channel map is a G channel map, and the G pixel is extracted from (3,5) of the G channel map as the pixel of the current position (3,5) in the bayer array image to be generated.
For another example, if the pixel required for the current position (100,112) in the bayer array image to be generated is an R pixel, the target single-color channel map is an R channel map, and the R pixel is extracted from (100,112) of the R channel map as the pixel of the current position (100,112) in the bayer array image to be generated.
In this embodiment, the pixel required at the current position is sequentially determined from the Bayer array image to be generated; a target single-color channel map is determined from among the second single-color channel maps based on the pixel required at the current position; and the second single-color pixel is extracted from the corresponding position of the target single-color channel map as the pixel at the current position in the Bayer array image to be generated, until the pixels at all positions in the Bayer array image to be generated have been generated, so that the Bayer array image can be generated accurately.
In another embodiment, after the generating the second target image, the method further includes: generating a second panchromatic channel image and a plurality of second single-color channel images according to the second target image, and combining the second single-color channel images to generate a Bayer array image; and the pixels in the second panchromatic channel image are all second panchromatic pixels, and the pixels in each second single-color channel image are all same kind of second single-color pixels.
For example, if the second target image is an RGBW image, the electronic device may generate an R-channel image, a G-channel image, a B-channel image, and a W-channel image based on the second target image, and then combine the R-channel image, the G-channel image, and the B-channel image to generate a bayer array image, that is, an RGB image.
In another embodiment, there is provided another image generation method including: exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point; taking each single color photosensitive data as a third single color pixel and taking each panchromatic photosensitive data as a third panchromatic pixel; a third target image is generated based on the third single-color pixels and the third panchromatic pixels.
Each single-color photosensitive data is taken as a third single-color pixel and each panchromatic photosensitive data as a third panchromatic pixel, and a third target image is generated based on the third single-color pixels and the third panchromatic pixels; the third target image is an image in the full output mode (fullsize).
FIG. 7 is a diagram illustrating a full output mode in one embodiment. The image sensor controls the pixel point array to read out the photosensitive data in a full output mode (fullsize) according to the sequence of row by row or column by column, and generates a third target image.
In one embodiment, after generating the first target image, the method further includes: identifying attributes of the image sensor; acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor; and calling the driving module corresponding to the driving module identification to transmit the adjustment parameter to the processor, and adjusting the first target image by adopting the adjustment parameter through the processor to obtain a final output image.
It can be understood that the attribute of the image sensor may specifically indicate whether it belongs to a front or rear camera, a main or secondary camera, and the like. Different image sensors have different attributes, so different sensors correspond to different driving modules and adjustment parameters. The electronic device may invoke the CSID module to identify the attribute of the image sensor.
The driving module is a module that enables the image sensor and the processor to communicate with each other. An adjustment parameter is a parameter used to adjust the image. The adjustment parameters include at least a black level (OB, Optical Black) parameter, a lens shading correction (LSC) parameter, a dead pixel compensation (BPC) parameter, a demosaic (DM) parameter, a color correction (CC) parameter, a global tone mapping (GTM) parameter, and a color conversion parameter.
The processor may be a Central Processing Unit (CPU), an Image Signal Processor (ISP), or another processor for processing an image.
Specifically, the driving module identifier and the adjustment parameters corresponding to the attribute of the image sensor are stored in a memory of the electronic device. After the electronic device identifies the attribute of the image sensor, it can acquire the corresponding driving module identifier and adjustment parameters from the memory, call the driving module corresponding to the driving module identifier, and send the adjustment parameters to the processor through the driving module; the processor then adjusts the first target image using the adjustment parameters to obtain a final output image. Adjusting the first target image with the adjustment parameters specifically means that the processor sequentially performs black level correction, lens shading correction, dead pixel compensation, demosaicing, color correction, global tone mapping and color conversion on the first target image.
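The adjustment sequence described above can be sketched as a simple stage chain (illustrative only; each stage here is a stand-in callable supplied via the parameters, not a real ISP algorithm):

```python
def adjust_image(image, params):
    """Apply the adjustment stages in the order the text lists them.

    `params` maps stage names to callables; stages without a callable
    are skipped (identity).  Stage names are assumptions for this sketch.
    """
    stages = [
        "black_level", "lens_shading", "dead_pixel",
        "demosaic", "color_correction", "tone_mapping", "color_conversion",
    ]
    for stage in stages:
        fn = params.get(stage)
        if fn is not None:
            image = fn(image)   # each stage transforms the running image
    return image
```

The fixed `stages` list mirrors the fixed processing order in the text; supplying the per-stage callables through `params` mirrors the adjustment parameters being delivered by the driving module.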
Further, the first target image is adjusted by the processor through the adjustment parameters to obtain an adjusted first target image, and color space conversion is performed on the adjusted first target image to obtain an image of the specified color space.
The specified color space may specifically be YUV or HSI (Hue, Saturation, Intensity), and the like. In YUV, Y represents brightness (Luma), i.e., a gray-scale value, while U and V represent chroma (Chroma) and describe the color and saturation of a pixel.
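As one concrete instance of such a color space conversion, an RGB pixel can be mapped to YUV with the BT.601 full-range matrix (one common convention; the patent does not specify which matrix is used):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range coefficients.

    Y is the luma (gray-scale value); U and V are the chroma components.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```

For a neutral gray input (R = G = B), U and V come out near zero, matching the intuition that gray carries no chroma.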
In this embodiment, if the attribute of the image sensor is identified, the corresponding driving module identifier and the adjustment parameter may be obtained based on the attribute of the image sensor, and then the driving module corresponding to the driving module identifier is called to transmit the adjustment parameter to the processor, and the processor adjusts the first target image by using the adjustment parameter, so that the final output image may be accurately obtained.
In other embodiments, the first target image may be replaced by the second target image or the third target image.
In one embodiment, as shown in fig. 8, the electronic device generates a first target image through an image sensor, inputs the first target image into the CSID module, and identifies the attribute of the image sensor. Based on the attribute of the image sensor, the corresponding driving module identifier and adjustment parameters are acquired, and the driving module is called to transmit the adjustment parameters to the image signal processor. The image signal processor sequentially performs black level correction, lens shading correction, dead pixel compensation and demosaicing on the first target image using the adjustment parameters to obtain an intermediate image, sequentially performs color correction, global tone mapping and color conversion on the intermediate image to obtain a processed image, and then performs color space conversion on the processed image to convert it into a YUV image. The image sensor may be an RGBW sensor, in which case the first target image is an RGBW image.
The image sensor can be customized for the RGBW first target image, and the processing algorithms for black level correction, lens shading correction, dead pixel compensation and demosaicing can likewise be customized for the RGBW first target image; these algorithms are compatible with the first target image in the first-level merging mode, the second target image in the second-level merging mode, and the third target image in the full output mode.
In one embodiment, before calling the driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor and adjusting the first target image with the adjustment parameters to obtain the final output image, the method further includes: converting the first target image into a Bayer array image. In that case, calling the driving module corresponding to the driving module identifier and adjusting the image with the adjustment parameters to obtain the final output image includes: calling the driving module corresponding to the driving module identifier to transmit the adjustment parameters to the processor, and adjusting the Bayer array image with the adjustment parameters through the processor to obtain the final output image.
Converting the first target image into a Bayer array image specifically includes: sequentially determining the pixel required at the current position of the Bayer array image to be generated; generating a first panchromatic channel map and a plurality of first single-color channel maps from the first target image; determining, based on the pixel required at the current position, a target single-color channel map from among the first single-color channel maps; and extracting the first single-color pixel at the corresponding position of the target single-color channel map as the pixel at the current position in the Bayer array image to be generated, until the pixels at all positions in the Bayer array image to be generated have been generated, thereby obtaining the Bayer array image. The pixels in the first panchromatic channel map are all first panchromatic pixels, and the pixels in each first single-color channel map are all first single-color pixels of the same kind.
In another embodiment, the electronic device may also directly combine the first single-color channel maps to generate a bayer array image.
In other embodiments, the first target image may be replaced by the second target image or the third target image.
The electronic device inputs at least one of the first target image, the second target image and the third target image into a conversion module, and the conversion module converts it into a Bayer array image. The conversion module includes three algorithms: an algorithm for converting the first target image into a Bayer array image, an algorithm for converting the second target image into a Bayer array image, and an algorithm for converting the third target image into a Bayer array image. In one embodiment, the three algorithms may all be integrated into the conversion module. In another embodiment, the conversion module includes three separate sub-modules, each storing one of the three algorithms. Alternatively, the three algorithms may be implemented in software or made into hardware modules; a software implementation has the advantage of high flexibility, while a hardware module has a high processing speed, enabling real-time video preview.
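The conversion module's dispatch among the three algorithms can be sketched as follows (illustrative only; the mode names and registry structure are assumptions, standing in for the three sub-modules):

```python
def convert_to_bayer(image, mode, algorithms):
    """Dispatch to the conversion algorithm matching the image's output mode.

    `algorithms` maps a mode name (e.g. 'first_level', 'second_level',
    'full_output') to the converter registered for that mode.
    """
    if mode not in algorithms:
        raise ValueError(f"no Bayer conversion registered for mode {mode!r}")
    return algorithms[mode](image)
```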
And if the color space of the Bayer array image is RGB, sequentially performing black level, lens shading correction, dead pixel compensation, demosaicing, color correction, global tone mapping and color conversion on the RGB Bayer array image to obtain a final output image, wherein the color space of the final output image is also RGB. The electronic device performs color space conversion on the final output image, and may convert the RGB processed image into an image of a specified color space.
In one embodiment, as shown in fig. 9, the electronic device generates a first target image through the image sensor, inputs the first target image into the conversion module, and converts it into a Bayer array image through the conversion module. The image sensor may be an RGBW sensor, in which case the first target image is an RGBW image. The electronic device inputs the Bayer array image into the CSID module and identifies the attribute of the image sensor; based on the attribute of the image sensor, the corresponding driving module identifier and adjustment parameters are acquired, and the driving module is called to transmit the adjustment parameters to the image signal processor. The image signal processor sequentially performs black level correction, lens shading correction, dead pixel compensation and demosaicing on the Bayer array image using the adjustment parameters to obtain an intermediate image, sequentially performs color correction, global tone mapping and color conversion on the intermediate image to obtain a processed image, and then performs color space conversion on the processed image to convert it into a YUV image.
In one embodiment, after generating the first target image, the method further includes: generating a first panchromatic channel map and a Bayer array image from the first target image; respectively and sequentially carrying out black level, lens shading correction, dead pixel compensation and demosaicing on the first panchromatic channel image and the Bayer array image to obtain a processed first panchromatic channel image and a processed Bayer array image; fusing the processed first panchromatic channel image and the processed Bayer array image to obtain a fused image; and carrying out color correction, global tone mapping and color conversion on the fused image in sequence to obtain a final output image.
Generating a first panchromatic channel map and a Bayer array image from the first target image specifically includes splitting the first target image into the first panchromatic channel map and a Quad Bayer image, and then converting the Quad Bayer image into the Bayer array image. The electronic device inputs the Quad Bayer image into the Remosaic module, which converts it into the Bayer array image.
The fused image is obtained by fusing the processed first panchromatic channel map and the processed Bayer array image. In this embodiment, the electronic device fuses the processed first panchromatic channel map and the processed Bayer array image using a Fusion algorithm to obtain the fused image. The processed Bayer array image is an RGB image, and the fused image is also an RGB image; fusing in the panchromatic channel map, which has a higher light intake, improves the definition and signal-to-noise ratio of the fused RGB image.
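One way such a fusion might work is to scale each RGB pixel toward the panchromatic brightness (a rough sketch only; the actual Fusion algorithm is not disclosed in the text, and the blending formula and `alpha` parameter here are assumptions):

```python
def fuse_panchromatic(rgb_pixel, w_value, alpha=0.5):
    """Blend a higher-light-intake panchromatic value into an RGB pixel.

    The pixel's luma is compared with the panchromatic value, and the RGB
    channels are scaled partway (by `alpha`) toward that brightness,
    preserving hue while borrowing the panchromatic channel's light intake.
    """
    r, g, b = rgb_pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma approximation
    if luma == 0:
        return (w_value * alpha,) * 3           # no chroma to preserve
    gain = 1.0 + alpha * (w_value / luma - 1.0)
    return (r * gain, g * gain, b * gain)
```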
Furthermore, the electronic device performs color correction, global tone mapping and color conversion on the fused image in sequence to obtain a processed fused image, and may further perform color space conversion on the processed fused image to obtain an image in a specified color space.
The specified color space may specifically be YUV or HSI (Hue, Saturation, Intensity), and the like. In YUV, Y represents brightness (Luma), i.e., a gray-scale value, while U and V represent chroma (Chroma) and describe the color and saturation of a pixel.
In other embodiments, the first target image may be replaced by a second target image.
In one embodiment, as shown in fig. 10, the electronic device generates a first target image through the image sensor, inputs the first target image into a DDR (Double Data Rate) module, splits the first target image into a first panchromatic channel map and a Quad Bayer image through the DDR module, and inputs the Quad Bayer image into the Remosaic module, which converts it into a Bayer array image. The electronic device inputs the first panchromatic channel map and the Bayer array image respectively into the image signal processor, which sequentially performs black level correction, lens shading correction, dead pixel compensation and demosaicing on each of them; the processed first panchromatic channel map and the processed Bayer array image are then input into the DDR module, which calls the Fusion algorithm in the Fusion module to fuse them into a fused image. Color correction, global tone mapping and color conversion are then performed on the fused image in sequence to obtain a processed fused image, and color space conversion is performed on the processed fused image to convert it into a YUV image.
It should be understood that although the steps in the flowcharts of fig. 4 and figs. 8 to 10 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 4 and figs. 8 to 10 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; nor is their order of execution necessarily sequential, as they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 11 is a block diagram showing the configuration of an image generating apparatus according to an embodiment. As shown in fig. 11, there is provided an image generating apparatus applied to an electronic device including an image sensor, the image sensor including a pixel point array, the pixel point array including minimum pixel point repeating units, each of the minimum pixel point repeating units including a plurality of pixel point sub-units, each of the pixel point sub-units including a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points in the pixel point sub-units being alternately arranged in both a row direction and a column direction; the image generation apparatus includes: an exposure module 1102, a merging module 1104, and an image generation module 1106, wherein:
the exposure module 1102 is configured to expose each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point.
A merging module 1104, configured to, in response to a first-level merging instruction, merge two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merge two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other.
An image generation module 1106 is configured to generate a first target image based on the first single-color pixels and the first panchromatic pixels.
The image generating apparatus is applied to an electronic device whose image sensor includes a pixel point array. The pixel point array includes minimum pixel point repeating units, each of which includes a plurality of pixel point sub-units; each pixel point sub-unit includes a plurality of single-color pixel points and a plurality of panchromatic pixel points, arranged alternately in both the row direction and the column direction, so that a higher amount of incoming light can be received through the panchromatic pixel points. The electronic device exposes each pixel point in the pixel point array to obtain single-color photosensitive data and panchromatic photosensitive data, and then merges the single-color photosensitive data and the panchromatic photosensitive data respectively to obtain first single-color pixels and first panchromatic pixels. Because the first panchromatic pixels carry a higher amount of incoming light, an image generated from the first single-color pixels and the first panchromatic pixels has higher overall brightness and a higher signal-to-noise ratio, so the first target image is generated more accurately.
In the pixel merging process, the electronic device merges two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merges two panchromatic photosensitive data obtained in a second diagonal direction to obtain a first panchromatic pixel. Because each pixel is obtained by merging only two photosensitive data, the target image can be output quickly and output responsiveness is improved, balancing the image information retained in the target image against the output speed.
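A minimal sketch of this diagonal merge on a single 2x2 cell of the checkerboard (single-color samples on the main diagonal, panchromatic samples on the anti-diagonal), assuming the averaging variant; the document equally allows summing:

```python
import numpy as np

def bin_first_level(cell: np.ndarray):
    """cell: 2x2 array laid out as [[a, w], [w, a]].
    Returns (first single-color pixel, first panchromatic pixel)."""
    color = (cell[0, 0] + cell[1, 1]) / 2.0  # first diagonal direction
    pan = (cell[0, 1] + cell[1, 0]) / 2.0    # second diagonal direction
    return color, pan
```

The two diagonals are perpendicular, matching the requirement that the first and second diagonal directions be perpendicular to each other.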
In one embodiment, the merging module 1104 is further configured to add or average two single-color sensitive data obtained from a first diagonal direction in the pixel point array to obtain a first single-color pixel; and adding or averaging the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel.
In one embodiment, the image generation module 1106 is further configured to sequentially determine the pixel required at a current position of a Bayer array image to be generated; generate a first panchromatic channel image and a plurality of first single-color channel images from the first target image; determine a target single-color channel image from the first single-color channel images based on the pixel required at the current position; and extract the first single-color pixel from the corresponding position of the target single-color channel image as the pixel at the current position in the Bayer array image to be generated, until the pixels at all positions in the Bayer array image to be generated have been generated, thereby obtaining the Bayer array image; wherein the pixels in the first panchromatic channel image are all first panchromatic pixels, and the pixels in each first single-color channel image are all first single-color pixels of the same kind.
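The per-position channel selection can be sketched as below, assuming full-resolution single-color channel images and a hypothetical a/b/c Bayer pattern matching the notation used in this document; the interpolation that fills each channel image is not shown:

```python
import numpy as np

def assemble_bayer(channels: dict, pattern=(("a", "b"), ("b", "c"))) -> np.ndarray:
    """channels: {'a': H x W array, 'b': ..., 'c': ...} single-color channel images.
    At each position, pick the pixel from the channel the Bayer pattern requires."""
    h, w = next(iter(channels.values())).shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = channels[pattern[y % 2][x % 2]][y, x]
    return out
```

Iterating every position of the image to be generated and copying from the matching channel image mirrors the "determine target channel, then extract at the corresponding position" loop described above.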
In an embodiment, the merging module 1104 is further configured to, in response to the second-level merging instruction, merge, for each pixel point sub-unit, the single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel, and merge the panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel; the image generation module 1106 is further configured to generate a second target image based on the second single-color pixels and the second panchromatic pixels.
In an embodiment, the merging module 1104 is further configured to add or average the single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel; and adding or averaging all the panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel.
In one embodiment, the image sensor further comprises a color filter array, the color filter array comprises minimum color filter repeating units, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of panchromatic color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array in a one-to-one mode, and light rays filtered by the color filters are projected to the corresponding pixel points to obtain photosensitive data; the minimum color filter repeating unit is 8 rows, 8 columns and 64 color filters, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 denotes a full-color filter, and a1, b1 and c1 each denote a single-color filter;
The photosensitive data obtained by exposure of each minimum pixel point repeating unit are merged to obtain a second pixel area in the second target image; the second pixel area is 2 rows, 4 columns and 8 pixels, arranged as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel and a3, b3 and c3 each denote a second single-color pixel.
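A sketch of this second-level merge on one 4x4 pixel point sub-unit, assuming the averaging variant (the document also allows summing) and the checkerboard layout shown above, with the color samples at even-parity positions:

```python
import numpy as np

def bin_second_level(subunit: np.ndarray):
    """subunit: 4x4 checkerboard of one color and panchromatic samples.
    Returns (second single-color pixel, second panchromatic pixel)."""
    rows, cols = np.indices(subunit.shape)
    color_mask = (rows + cols) % 2 == 0   # color on even-parity positions
    color = subunit[color_mask].mean()    # merge all 8 single-color samples
    pan = subunit[~color_mask].mean()     # merge all 8 panchromatic samples
    return color, pan
```

Each 4x4 sub-unit thus collapses to one color pixel and one panchromatic pixel, which is why an 8x8 repeating unit yields the 2-row, 4-column second pixel area.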
In one embodiment, the image sensor further comprises a color filter array, the color filter array comprises minimum color filter repeating units, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of panchromatic color filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array in a one-to-one mode, and light rays filtered by the color filters are projected to the corresponding pixel points to obtain photosensitive data; the minimum color filter repeating unit is 8 rows, 8 columns and 64 color filters, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
where w1 denotes a full-color filter, and a1, b1, and c1 each denote a single-color filter.
In one embodiment, the photosensitive data obtained by exposure of each minimum pixel point repeating unit are merged to obtain a first pixel area in the first target image; the first pixel area is 4 rows, 8 columns and 32 pixels, arranged as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
where w2 denotes the first panchromatic pixel and a2, b2 and c2 each denote the first single-color pixel.
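Applying the 2x2 diagonal merge across a whole mosaic reproduces this interleaved 4-row, 8-column layout. A sketch, assuming averaging and color samples on even-parity positions of the input checkerboard:

```python
import numpy as np

def bin_mosaic(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: H x W checkerboard (H, W even). Each 2x2 cell yields one
    panchromatic and one single-color pixel, written side by side."""
    h, w = mosaic.shape
    out = np.empty((h // 2, w))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            cell = mosaic[y:y + 2, x:x + 2]
            out[y // 2, x] = (cell[0, 1] + cell[1, 0]) / 2.0      # w pixel
            out[y // 2, x + 1] = (cell[0, 0] + cell[1, 1]) / 2.0  # color pixel
    return out
```

An 8x8 minimum repeating unit therefore maps to a 4x8 first pixel area with panchromatic pixels in the even columns and single-color pixels in the odd columns, matching the arrangement above.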
In one embodiment, the apparatus further comprises an adjustment module for identifying an attribute of the image sensor; acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor; and calling the driving module corresponding to the driving module identification to transmit the adjustment parameter to the processor, and adjusting the first target image by adopting the adjustment parameter through the processor to obtain a final output image.
In one embodiment, the adjusting module is further configured to convert the first target image into a bayer array image; and calling the driving module corresponding to the driving module identification to transmit the adjustment parameters to the processor, and adjusting the Bayer array image by adopting the adjustment parameters through the processor to obtain a final output image.
In one embodiment, the adjusting module is further configured to generate a first panchromatic channel map and a bayer array image from the first target image; respectively and sequentially carrying out black level, lens shading correction, dead pixel compensation and demosaicing on the first panchromatic channel image and the Bayer array image to obtain a processed first panchromatic channel image and a processed Bayer array image; fusing the processed first panchromatic channel image and the processed Bayer array image to obtain a fused image; and carrying out color correction, global tone mapping and color conversion on the fused image in sequence to obtain a final output image.
The division of the modules in the image generating apparatus is merely for illustration, and in other embodiments, the image generating apparatus may be divided into different modules as needed to complete all or part of the functions of the image generating apparatus.
For specific limitations of the image generation apparatus, reference may be made to the above limitations of the image generation method, which are not repeated here. The respective modules in the image generation apparatus described above may be wholly or partially implemented by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
Fig. 12 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, or a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units and may be, for example, a CPU (Central Processing Unit) or a DSP (Digital Signal Processor). The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image generation method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium.
The implementation of each module in the image generation apparatus provided in the embodiments of the present application may be in the form of a computer program. The computer program may run on a terminal or a server. Program modules constituted by such a computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image generation method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image generation method.
Any reference to memory, storage, a database, or other media used herein may include non-volatile and/or volatile memory. Non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), or flash memory. Volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Synchronous Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. An image generation method applied to an electronic device including an image sensor, the image sensor including a pixel point array, the pixel point array including minimum pixel point repeating units, each of the minimum pixel point repeating units including a plurality of pixel point sub-units, each of the pixel point sub-units including a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points in the pixel point sub-units being alternately arranged in both a row direction and a column direction; the method comprises the following steps:
exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel points and panchromatic photosensitive data corresponding to the panchromatic pixel points;
responding to a first-stage merging instruction, merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
a first target image is generated based on each of the first single-color pixels and each of the first panchromatic pixels.
2. The method of claim 1, wherein the merging two single-color photosensitive data obtained in a first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging two panchromatic photosensitive data obtained in a second diagonal direction in the pixel point array to obtain a first panchromatic pixel comprises:
adding or averaging two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel;
and adding or averaging the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel.
3. The method of claim 1, wherein after generating the first target image, further comprising:
determining pixels required by the current position from a Bayer array image to be generated in sequence;
generating a first panchromatic channel image and a plurality of first single-color channel images according to the first target image, determining a target single-color channel image from each first single-color channel image based on pixels required by the current position, and extracting the first single-color pixels from the corresponding positions of the target single-color channel images as pixels of the current position in the Bayer array image to be generated until the pixels of all positions in the Bayer array image to be generated are generated to obtain the Bayer array image; wherein the pixels in the first panchromatic channel map are all first panchromatic pixels and the pixels in each of the first single-color channel maps are all same kind of first single-color pixels.
4. The method of any one of claims 1 to 3, wherein the image sensor further comprises a color filter array, the color filter array comprises minimum color filter repeating units, each minimum color filter repeating unit comprises a plurality of color filter subunits, each color filter subunit comprises a plurality of single-color filters and a plurality of panchromatic filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array in a one-to-one manner, and the light filtered by the color filter is projected to the corresponding pixel point to obtain photosensitive data;
the minimum color filter repeating unit is 64 color filters in 8 rows and 8 columns, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
where w1 denotes a full-color filter, and a1, b1, and c1 each denote a single-color filter.
5. The method according to claim 4, wherein the photosensitive data obtained by exposure of each minimum pixel point repeating unit are merged to obtain a first pixel area in the first target image;
the first pixel area is 4 rows, 8 columns and 32 pixels, arranged as follows:
w2 a2 w2 a2 w2 b2 w2 b2
w2 a2 w2 a2 w2 b2 w2 b2
w2 b2 w2 b2 w2 c2 w2 c2
w2 b2 w2 b2 w2 c2 w2 c2
where w2 denotes the first panchromatic pixel and a2, b2 and c2 each denote the first single-color pixel.
6. The method of claim 1, wherein after obtaining the single-color photosensitive data corresponding to the single-color pixel points and the panchromatic photosensitive data corresponding to the panchromatic pixel points, the method further comprises:
in response to a two-stage merging instruction, for each pixel point subunit, merging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel, and merging the panchromatic photosensitive data obtained by the pixel point subunit to obtain a second panchromatic pixel;
a second target image is generated based on each of the second single-color pixels and each of the second panchromatic pixels.
7. The method of claim 6, wherein the merging the single-color photosensitive data obtained by the pixel point sub-unit to obtain a second single-color pixel, and merging the panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel comprises:
adding or averaging the single-color photosensitive data obtained by the pixel point subunit to obtain a second single-color pixel;
and adding or averaging all the panchromatic photosensitive data obtained by the pixel point sub-unit to obtain a second panchromatic pixel.
8. The method of claim 6, wherein the image sensor further comprises a color filter array comprising minimum color filter repeating units, each of the minimum color filter repeating units comprises a plurality of color filter subunits, each of the color filter subunits comprises a plurality of single-color filters and a plurality of panchromatic filters, each color filter in the color filter array corresponds to each pixel point in the pixel point array in a one-to-one manner, and the light filtered by the color filter is projected to the corresponding pixel point to obtain the photosensitive data;
the minimum color filter repeating unit is 64 color filters in 8 rows and 8 columns, and the arrangement mode is as follows:
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
a1 w1 a1 w1 b1 w1 b1 w1
w1 a1 w1 a1 w1 b1 w1 b1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
b1 w1 b1 w1 c1 w1 c1 w1
w1 b1 w1 b1 w1 c1 w1 c1
wherein w1 denotes a full-color filter, and a1, b1 and c1 each denote a single-color filter;
merging the photosensitive data obtained by exposure of the minimum pixel point repeating unit to obtain a second pixel area in the second target image; the second pixel area is 8 pixels with 2 rows and 4 columns, and the arrangement mode is as follows:
w3 a3 w3 b3
w3 b3 w3 c3
where w3 denotes a second full-color pixel and a3, b3 and c3 each denote a second single-color pixel.
9. The method of claim 1, wherein after generating the first target image, further comprising:
identifying attributes of the image sensor;
acquiring a corresponding driving module identifier and an adjusting parameter based on the attribute of the image sensor;
and calling the driving module corresponding to the driving module identification to transmit the adjustment parameter to a processor, and adjusting the first target image by adopting the adjustment parameter through the processor to obtain a final output image.
10. The method according to claim 9, wherein before the invoking of the driver module corresponding to the driver module identifier transmits the adjustment parameter to the processor, and the processor adjusts the first target image by using the adjustment parameter to obtain a final output image, the method further comprises:
converting the first target image into a bayer array image;
the calling the driving module corresponding to the driving module identifier, and adjusting the first target image by using the adjustment parameter to obtain a final output image, including:
and calling the driving module corresponding to the driving module identification to transmit the adjustment parameters to a processor, and adjusting the Bayer array image by using the adjustment parameters through the processor to obtain a final output image.
11. The method of claim 1, wherein after generating the first target image, further comprising:
generating a first panchromatic channel map and a Bayer array image from the first target image;
respectively and sequentially carrying out black level, lens shading correction, dead pixel compensation and demosaicing on the first panchromatic channel image and the Bayer array image to obtain a processed first panchromatic channel image and a processed Bayer array image;
fusing the processed first panchromatic channel image and the processed Bayer array image to obtain a fused image;
and carrying out color correction, global tone mapping and color conversion on the fused image in sequence to obtain a final output image.
12. An image generating apparatus applied to an electronic device including an image sensor, the image sensor including a pixel point array, the pixel point array including minimum pixel point repeating units, each of the minimum pixel point repeating units including a plurality of pixel point sub-units, each of the pixel point sub-units including a plurality of single-color pixel points and a plurality of panchromatic pixel points, the single-color pixel points and the panchromatic pixel points in the pixel point sub-units being alternately arranged in both a row direction and a column direction; the apparatus comprises:
the exposure module is used for exposing each pixel point in the pixel point array to obtain single-color photosensitive data corresponding to the single-color pixel point and panchromatic photosensitive data corresponding to the panchromatic pixel point;
the merging module is used for responding to a first-stage merging instruction, merging the two single-color photosensitive data obtained in the first diagonal direction in the pixel point array to obtain a first single-color pixel, and merging the two panchromatic photosensitive data obtained in the second diagonal direction in the pixel point array to obtain a first panchromatic pixel; wherein the first diagonal direction and the second diagonal direction are perpendicular to each other;
an image generation module to generate a first target image based on each of the first single-color pixels and each of the first panchromatic pixels.
13. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image generation method according to any of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202110937243.8A 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium Active CN113676675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110937243.8A CN113676675B (en) 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113676675A true CN113676675A (en) 2021-11-19
CN113676675B CN113676675B (en) 2023-08-15

Family

ID=78542975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110937243.8A Active CN113676675B (en) 2021-08-16 2021-08-16 Image generation method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113676675B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101233762A (en) * 2005-07-28 2008-07-30 伊斯曼柯达公司 Image sensor with improved lightsensitivity
CN105578080A (en) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 Imaging method, image sensor, imaging device and electronic device
CN111586323A (en) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 Image sensor, control method, camera assembly and mobile terminal
CN112118378A (en) * 2020-10-09 2020-12-22 Oppo广东移动通信有限公司 Image acquisition method and device, terminal and computer readable storage medium
CN112261391A (en) * 2020-10-26 2021-01-22 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040084A (en) * 2021-12-01 2022-02-11 Oppo广东移动通信有限公司 Image sensor, camera module, electronic equipment, image generation method and device
CN114693580A (en) * 2022-05-31 2022-07-01 荣耀终端有限公司 Image processing method and related device
WO2023231583A1 (en) * 2022-05-31 2023-12-07 荣耀终端有限公司 Image processing method and related device thereof
CN115442573A (en) * 2022-08-23 2022-12-06 深圳市汇顶科技股份有限公司 Image processing method and device and electronic equipment
CN115442573B (en) * 2022-08-23 2024-05-07 深圳市汇顶科技股份有限公司 Image processing method and device and electronic equipment
CN115696063A (en) * 2022-09-13 2023-02-03 荣耀终端有限公司 Photographing method and electronic equipment
CN115866422A (en) * 2022-11-24 2023-03-28 威海华菱光电股份有限公司 Pixel data determination method and device and electronic equipment
WO2024109028A1 (en) * 2022-11-24 2024-05-30 威海华菱光电股份有限公司 Pixel data determination method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN113676675B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN113676675B (en) Image generation method, device, electronic equipment and computer readable storage medium
WO2021208593A1 (en) High dynamic range image processing system and method, electronic device, and storage medium
CN213279832U (en) Image sensor, camera and terminal
WO2021179806A1 (en) Image acquisition method, imaging apparatus, electronic device, and readable storage medium
KR102287944B1 (en) Apparatus for outputting image and method thereof
CN113676636B (en) Method and device for generating high dynamic range image, electronic equipment and storage medium
CN111432099A (en) Image sensor, processing system and method, electronic device, and storage medium
WO2021212763A1 (en) High-dynamic-range image processing system and method, electronic device and readable storage medium
CN112118378A (en) Image acquisition method and device, terminal and computer readable storage medium
CN114125242B (en) Image sensor, camera module, electronic device, image generation method and device
CN114422766B (en) Image acquisition equipment
CN113676708B (en) Image generation method, device, electronic equipment and computer readable storage medium
CN113573030B (en) Image generation method, device, electronic equipment and computer readable storage medium
CN113840067A (en) Image sensor, image generation method and device and electronic equipment
WO2023124607A1 (en) Image generation method and apparatus, electronic device, and computer-readable storage medium
CN114040084B (en) Image sensor, camera module, electronic device, image generation method and device
CN114363486B (en) Image sensor, camera module, electronic device, image generation method and device
CN113676635B (en) Method and device for generating high dynamic range image, electronic equipment and storage medium
CN114157795B (en) Image sensor, camera module, electronic device, image generation method and device
CN114125318B (en) Image sensor, camera module, electronic device, image generation method and device
KR102412278B1 (en) Camera module including filter array of complementary colors and electronic device including the camera module
CN112738493B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111970460B (en) High dynamic range image processing system and method, electronic device, and readable storage medium
CN114554046B (en) Image sensor, camera module, electronic device, image generation method and device
JP2022547221A (en) Image capture method, camera assembly and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant