WO2011142282A1 - Imaging device and image processing device - Google Patents
Imaging device and image processing device Download PDF Info
- Publication number
- WO2011142282A1 (application PCT/JP2011/060467, filed as JP2011060467W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging
- correction coefficient
- unit
- deblurring
- data
- Prior art date
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
- G06T2207/30208—Marker matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
Definitions
- the present invention relates to an imaging apparatus and an image processing apparatus that are suitably mounted on, for example, a portable electronic device.
- an autofocus lens a lens whose focus can be switched in two steps, or the like is used as an imaging optical system.
- The present invention has been made in view of such problems, and an object of the present invention is to provide an imaging apparatus and an image processing apparatus capable of realizing both far-distance shooting and close-up shooting with a low-cost, simple configuration.
- An imaging apparatus includes an imaging lens, an imaging element that acquires imaging data based on light rays passing through the imaging lens, and an image processing unit that performs image processing on a captured image based on the imaging data.
- The image processing unit includes a deblurring processing unit that performs blur correction on the captured image, and a correction coefficient selection unit that selects one of a plurality of blur correction coefficients, each set in accordance with the object distance from the imaging lens to the subject, and outputs the selected coefficient to the deblurring processing unit.
- An image processing apparatus includes a deblurring processing unit that performs blur correction on a captured image acquired by an imaging element from light rays passing through an imaging lens, and a correction coefficient selection unit that selects one of a plurality of blur correction coefficients, each set according to the object distance from the imaging lens to the subject, and outputs the selected coefficient to the deblurring processing unit.
- The correction coefficient selection unit selects one of a plurality of blur correction coefficients set according to the object distance, and the deblurring processing unit uses the selected coefficient to correct the blur of the imaging data.
- The correction coefficient selection unit selects one of a plurality of correction coefficients (for example, a coefficient for a near object or for a far object) for each set of imaging data having a different object distance (for example, imaging data of a far object and of a near object).
- blur correction is performed using one selected from a plurality of blur correction coefficients set according to the object distance.
- Appropriate blur correction can thus be performed for each set of imaging data having a different object distance. Accordingly, it is possible to focus on both the far point and the near point without using a special imaging optical system such as an autofocus lens or a focus-switching lens, realizing far-distance shooting and close-up shooting with a low-cost, simple configuration.
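The mode-dependent selection described above can be sketched in Python. This is a hypothetical illustration: the kernel values, the `CORRECTION_KERNELS` table, and the function names are assumptions, not taken from the patent; real coefficients k1/k2 would be inverse-PSF filters measured for the actual lens.

```python
import numpy as np

# Hypothetical blur-correction kernels, one per shooting mode. In the patent
# these are the inverse-PSF coefficients k1 (normal/far, 3x3) and
# k2 (macro/near, 5x5); here they are placeholder sharpening kernels.
_SHARPEN = np.array([[ 0, -1,  0],
                     [-1,  5, -1],
                     [ 0, -1,  0]], dtype=float)
CORRECTION_KERNELS = {
    "normal": _SHARPEN,            # 3x3 far-point correction filter
    "macro": np.pad(_SHARPEN, 1),  # 5x5 near-point filter (zero-padded stand-in)
}

def select_correction_kernel(mode_setting_signal: str) -> np.ndarray:
    """Correction-coefficient selection: pick the kernel for the mode."""
    return CORRECTION_KERNELS[mode_setting_signal]

def deblur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 2-D correlation of the captured image with the chosen kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

img = np.ones((8, 8))
corrected = deblur(img, select_correction_kernel("macro"))
```

A flat input stays flat because both kernels sum to one; only the kernel size changes between the two modes, mirroring the 3 × 3 / 5 × 5 distinction in the embodiment.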
- FIG. 1 is a functional block diagram illustrating a schematic configuration of an imaging apparatus according to an embodiment of the present invention. FIG. 2 is a circuit diagram showing an example of a unit pixel in the imaging element shown in FIG. 1. FIG. 3 is a schematic diagram showing an example of the color arrangement of the color filter.
- FIG. 4 is a functional block diagram illustrating a detailed configuration of the image processing unit illustrated in FIG. 1.
- FIG. 5 relates to the normal shooting mode: (A) shows the state of light-ray acquisition during far-distance shooting, (B) the far-point PSF, and (C) the kernel size of the far-point correction filter.
- FIG. 6 relates to the macro shooting mode: (A) shows the state of light-ray acquisition during close-up photography, (B) the near-point PSF, and (C) the kernel size of the near-point correction filter.
- FIG. 7 is a schematic diagram for explaining a part of the deblurring operation in the image processing unit illustrated in FIG. 1. FIG. 8 shows an example of a captured QR code, where (A) shows the image before deblurring and (B) after deblurring.
- FIG. 9 is a schematic diagram for explaining a deblurring operation according to Modification 1.
- FIG. 10 is a schematic diagram for explaining a difference in image forming position for each color light according to Modification 2.
- FIG. 10 is a schematic diagram for explaining a deblurring operation according to Modification 2.
- 1. Embodiment (example of an imaging device that performs deblurring processing by switching between far-point and near-point correction filters)
- 2. Modification 1 (example in which deblurring processing is applied to imaging data acquired by thinning out lines)
- 3. Modification 2 (example in which deblurring processing is applied to specific color components (G, B))
- FIG. 1 is a functional block diagram of an imaging apparatus (imaging apparatus 1) according to an embodiment of the present invention.
- the imaging device 1 includes an imaging lens 10, an imaging element 11, an image processing unit 12, an imaging element driving unit 13, an information detection unit 14, and a control unit 15.
- The imaging apparatus 1 is mounted on, for example, a camera-equipped mobile phone, and can switch between a mode for photographing a two-dimensional barcode such as a QR code at close range (macro shooting mode) and a mode for photographing a distant object (normal shooting mode). In the present embodiment, the normal shooting mode assumes far-distance shooting with the subject far away, while the macro shooting mode assumes close-up shooting with the subject close to the camera.
- the imaging lens 10 is a main lens for imaging an object (subject), and is, for example, a general fixed focus lens used in a video camera, a still camera, or the like.
- An aperture stop and a shutter (not shown) are disposed in the vicinity of the pupil plane of the imaging lens 10.
- The imaging element 11 is a photoelectric conversion element that accumulates charge based on the received light; it acquires imaging data (imaging data D0, described later) from the light rays passing through the imaging lens 10 and outputs the data to the image processing unit 12.
- the image sensor 11 is composed of a solid-state image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge Coupled Device), and a plurality of unit pixels are arranged in an array on the image sensor 11.
- FIG. 2 shows a circuit configuration of a unit pixel in the image sensor 11.
- the unit pixel 110 includes, for example, a photodiode 111, a transfer transistor 112, an amplification transistor 113, a select transistor 114, a reset transistor 115, and a floating node ND116.
- The photodiode 111 photoelectrically converts incident light into signal charge (for example, electrons) with a charge amount corresponding to the amount of light, and accumulates that charge.
- the transfer transistor 112 has a source and a drain connected to the cathode of the photodiode 111 and the floating node ND116, respectively, and a gate connected to the transfer selection line TRFL.
- the transfer transistor 112 has a function of transferring signal charges accumulated in the photodiode 111 to the floating node ND116 by being turned on.
- the amplification transistor 113 and the select transistor 114 are connected in series between the power supply potential VDD and the signal line SGNL.
- the amplification transistor 113 has a gate connected to the floating node ND116, amplifies the potential of the floating node ND116, and outputs the amplified potential to the signal line SGNL via the select transistor 114.
- the gate of the select transistor 114 is connected to the select line SELL.
- the reset transistor 115 has a source connected to the floating node ND116, a drain connected to a predetermined potential line, a gate connected to the reset line RSTL, and a function of resetting the potential of the floating node ND116.
- FIG. 3 illustrates a Bayer color filter as an example.
- A Bayer array is used, in which each 2 × 2 cell contains two green (G) filters, one red (R) filter, and one blue (B) filter.
- The Bayer array is an arrangement that gives more weight to luminance resolution than to color resolution.
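For illustration, the 2 × 2 Bayer cell (two G, one R, one B) can be generated as a color-filter map. The RGGB phase chosen here is one common convention and is an assumption; the text does not fix which corner of the cell holds which color.

```python
import numpy as np

def bayer_pattern(height: int, width: int) -> np.ndarray:
    """Build a Bayer colour-filter map: G on the diagonal of each 2x2 cell,
    R and B on the off-diagonal (RGGB phase; an assumed convention)."""
    cfa = np.empty((height, width), dtype="<U1")
    cfa[0::2, 0::2] = "R"
    cfa[0::2, 1::2] = "G"
    cfa[1::2, 0::2] = "G"
    cfa[1::2, 1::2] = "B"
    return cfa

cfa = bayer_pattern(4, 4)
```

In a 4 × 4 patch this yields eight G sites against four R and four B, which is why the array favors luminance resolution.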
- a pre-processing unit may be provided at the output stage of the image sensor 11.
- The pre-processing unit performs sampling and quantization on the analog signal read from the image sensor 11, converts it into a digital signal (A/D conversion), and outputs it to the image processing unit 12.
- the function of the pre-processing unit can be provided to the image sensor 11 itself.
- the image processing unit 12 performs predetermined image processing including blur correction processing (deblurring processing) on the imaging data (imaging data D0) supplied from the imaging device 11.
- the image sensor driving unit 13 performs drive control such as light receiving and reading operations of the image sensor 11.
- the information detection unit 14 detects a shooting mode setting signal (mode setting signal) in the imaging apparatus 1.
- This mode setting signal is input by a user, for example, through operation of buttons, keys, switches, etc. in a cellular phone or the like.
- the user selects (switches) either a normal shooting mode for distant shooting or a macro shooting mode for close-up shooting as the shooting mode.
- the information detection unit 14 outputs the detection result to the control unit 15 as a mode setting signal Dm, and the control unit 15 controls the image processing operation of the image processing unit 12 according to the selected shooting mode.
- the image processing unit 12 can select (switch) an appropriate correction coefficient according to the object distance by the control described above.
- the control unit 15 drives and controls the image sensor driving unit 13, the image processing unit 12, and the information detection unit 14, and is, for example, a microcomputer.
- FIG. 4 shows an example of the image processing unit 12.
- the image processing unit 12 includes, for example, a preprocessing unit 120, a storage unit 121, a deblurring processing unit 122, a correction coefficient selection unit 123, and a holding unit 124.
- the image processing unit 12 corresponds to a specific example of the image processing apparatus of the present invention.
- The preprocessing unit 120 performs various types of image processing on the imaging data D0, such as defect correction, noise removal, white balance adjustment, shading correction, and Y/C separation, and outputs the result as image data D1 to the storage unit 121.
- the storage unit 121 is a memory for temporarily storing the image data D1 output from the preprocessing unit 120 for the subsequent deblurring process.
- the storage means in the storage unit 121 may be a fixed disk or a removable disk, and various types such as a magnetic disk, an optical disk, a magneto-optical disk, and a semiconductor memory are used.
- The image data held in the storage unit 121 is output to the deblurring processing unit 122 as image data D2 under the control of the correction coefficient selection unit 123.
- The storage unit 121 includes a plurality of line memories 121a, in which image data based on the imaging data D0 is stored for a predetermined number of lines.
- the number (number of lines) of the line memory 121a is set according to the size of the correction coefficient used in the deblurring processor 122.
- Since each correction coefficient is an inverse function of a PSF, its size (two-dimensional extent) differs depending on the object distance (shooting mode). That is, the filter size used in the deblurring process described later differs for each correction coefficient, and the number of lines in the line memory 121a is set to match that filter size.
- The deblurring processing unit 122 performs blur-correction processing (deblurring) on the imaging data supplied from the storage unit 121. Specifically, it performs the deblurring by filtering with the correction coefficient (correction coefficient k1 or k2) supplied from the correction coefficient selection unit 123, and outputs the corrected result as image data Dout.
- the correction coefficient selection unit 123 selects one of a plurality of correction coefficients for blur correction based on the control of the control unit 15, and outputs the selected correction coefficient to the deblurring unit 122.
- the plurality of correction coefficients are prepared for each of a plurality of different shooting modes, and differ depending on the object distance.
- As correction coefficients, two are used here: a correction coefficient k1 for the normal shooting mode and a correction coefficient k2 for the macro shooting mode.
- The correction coefficient k1 is for performing appropriate deblurring on image data of a distant object, and the correction coefficient k2 is for performing appropriate deblurring on image data of a close object.
- Each of the correction coefficients k1 and k2 is, for example, an inverse function of a point spread function (PSF).
- The PSF is the function h(x, y) in the following equation (1), where f(x, y) is the ideal (unblurred) image and g(x, y) is the observed blurred image; the correction coefficients k1 and k2 are then given by equation (2):
- f(x, y) × h(x, y) = g(x, y) ……… (1)
- 1/h(x, y) ……… (2)
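Equations (1) and (2) can be illustrated numerically: blurring is the convolution of the image f with the PSF h, and the correction applies 1/h in the frequency domain. A minimal Python sketch, assuming a hypothetical Gaussian PSF (the patent does not specify its shape) and adding a small stabilizing constant `eps`, which is my addition to avoid dividing by near-zero frequency components:

```python
import numpy as np

def gaussian_psf(size: int, sigma: float) -> np.ndarray:
    """Hypothetical PSF h(x, y), normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

def blur(f: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Equation (1): g = f * h, computed as a product in frequency space."""
    H = np.fft.fft2(h, s=f.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(f) * H))

def inverse_filter(g: np.ndarray, h: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Equation (2): apply 1/h in frequency space (eps added for stability)."""
    H = np.fft.fft2(h, s=g.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * np.conj(H) / (np.abs(H)**2 + eps)))

rng = np.random.default_rng(0)
f = rng.random((32, 32))          # ideal image
h = gaussian_psf(5, sigma=1.0)    # assumed PSF
g = blur(f, h)                    # observed blurred image
restored = inverse_filter(g, h)   # deblurred result
```

When the same PSF used for blurring is inverted, the restoration is near-exact; in practice the kernels k1 and k2 would be finite-size spatial approximations of this inverse, measured per object distance.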
- the holding unit 124 is a memory that holds the correction coefficients k1 and k2 as described above, and a storage unit similar to the storage unit 121 is used.
- the correction coefficients k1 and k2 may be stored in advance in the memory in the circuit as described above, but may be input from the outside via the information detection unit 14.
- FIG. 5A schematically shows a state of light acquisition in the normal photographing mode.
- In the imaging apparatus 1, for example in the normal shooting mode, the light beam L1 from the object (subject) passes through the imaging lens 10 and then reaches the imaging element 11.
- The received-light signal is read out line-sequentially under the control of the imaging element driving unit 13, thereby acquiring imaging data D0 (imaging data of a distant object).
- the acquired imaging data D0 is supplied to the image processing unit 12, and the image processing unit 12 performs predetermined image processing on the imaging data D0 and outputs it as image data Dout.
- the image processing unit 12 performs the following deblurring processing on the imaging data D0 based on the mode setting signal Dm input via the information detection unit 14.
- the preprocessing unit 120 performs various image processing such as defect correction processing, noise removal processing, white balance adjustment processing, shading correction processing, and Y / C separation processing.
- the defect correction process is a process for correcting a defect included in the imaging data D0 (a defect caused by an abnormality in the element of the imaging element 11).
- the noise removal process is a process for removing noise included in the image data (for example, noise generated when an image is taken in a dark place or a place where sensitivity is insufficient).
- the white balance adjustment process is a process of adjusting a color balance that has been lost due to individual differences in devices such as the pass characteristics of the color filter and the spectral sensitivity of the image sensor 11 and illumination conditions.
- the shading correction process is a process for correcting unevenness of the luminance level in the captured image plane.
- the Y / C separation process is a process of separating the luminance signal (Y) and the chroma signal (C).
- color interpolation processing such as processing for setting the black level of each pixel data (clamp processing) and demosaic processing may be performed.
- the processed image data is output to the storage unit 121 as image data D1.
- The storage unit 121 temporarily holds the image data D1 input from the preprocessing unit 120 in the line memories 121a, whose number of lines corresponds to the filter size of the subsequent deblurring with the correction coefficients k1 and k2.
- The number of lines in the line memory 121a of the storage unit 121 is set, according to the sizes of the correction coefficients k1 and k2 to be used, so that deblurring with either coefficient is supported, as follows.
- In the normal shooting mode, the filter size is relatively small (for example, the 3 × 3 kernel shown in FIG. 5C), because the degree of "blur" in the captured image is relatively small.
- FIG. 6A schematically shows the state of light beam acquisition in the macro photography mode.
- In the macro shooting mode, the imaging point fn of the object is separated from the light-receiving surface S1 of the imaging element 11, so the near-point PSF (FIG. 6B) has a larger two-dimensional spread than the far-point PSF.
- the filter size becomes relatively large (for example, the 5 ⁇ 5 kernel shown in FIG. 6C).
- the degree of “blur” in the captured image is relatively large, and thus the filter size is increased accordingly. Therefore, here, five line memories 121a are provided in consideration of the deblurring process (the deblurring process using the correction coefficient k2) in the macro shooting mode.
- The correction coefficient selection unit 123, under the control of the control unit 15 based on the mode setting signal Dm, selects from the correction coefficients k1 and k2 held in the holding unit 124 the coefficient appropriate to the object distance, and outputs it to the deblurring processing unit 122. Specifically, when the mode setting signal Dm indicates the normal shooting mode, the far-object correction coefficient k1 is selected; when it indicates the macro shooting mode, the near-object correction coefficient k2 is selected.
- When the mode is switched from normal to macro, the correction coefficient is switched from k1 to k2; when switched from macro to normal, from k2 to k1. The selected correction coefficient k1 (or k2) is output to the deblurring processing unit 122.
- The correction coefficient selection unit 123 also controls the storage unit 121 so that image data D2 with the number of lines matching the selected correction coefficient (that is, the filter size used in the deblurring process) is output to the deblurring processing unit 122. Specifically, as shown in FIG. 7A, in the normal shooting mode, pixel data D2a for three lines (a1 to a3), corresponding to the 3 × 3 filter size, is output to the deblurring processing unit 122 as image data D2. On the other hand, as shown in FIG. 7B, in the macro shooting mode, pixel data D2b for five lines (b1 to b5), corresponding to the 5 × 5 filter size, is output as image data D2.
- The deblurring processing unit 122 applies the deblurring process to the image data D2 for the predetermined number of lines supplied from the storage unit 121, using the correction coefficient supplied from the correction coefficient selection unit 123. Specifically, in the normal shooting mode, blur correction is performed by applying the correction coefficient k1 to the three lines of image data D2; in the macro shooting mode, by applying the correction coefficient k2 to the five lines. Through this deblurring process, the blur in the captured image corresponding to the image data D2 is effectively reduced, and image data Dout focused on the subject is output.
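The interplay between the line memories and the mode-dependent filter height can be sketched as a streaming window. This is a hypothetical illustration; the names `LINES_REQUIRED` and `stream_line_blocks` are my own, not from the patent.

```python
from collections import deque

# Filter heights per shooting mode (3x3 far-point, 5x5 near-point kernels).
LINES_REQUIRED = {"normal": 3, "macro": 5}

def stream_line_blocks(lines, mode):
    """Mimic the storage unit: hold only as many line memories as the
    selected correction coefficient's filter height requires, and hand a
    block of lines to the deblurring unit each time the window is full."""
    n = LINES_REQUIRED[mode]
    window = deque(maxlen=n)
    for line in lines:
        window.append(line)
        if len(window) == n:
            yield list(window)  # block passed on for filtering

image_lines = [f"line{i}" for i in range(6)]
normal_blocks = list(stream_line_blocks(image_lines, "normal"))
macro_blocks = list(stream_line_blocks(image_lines, "macro"))
```

For six input lines, the normal mode yields four overlapping three-line blocks and the macro mode yields two five-line blocks, matching the a1-a3 / b1-b5 groupings of FIG. 7.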
- As described above, for the imaging data D0, either the far-object correction coefficient k1 or the near-object correction coefficient k2 is selected based on the mode setting signal Dm, and the deblurring process is performed with the selected coefficient.
- By using the correction coefficients k1 and k2 in the deblurring process, appropriate blur correction can be applied to the imaging data of both far and close objects. Accordingly, it is possible to focus on both the far point and the near point without using a special imaging optical system such as an autofocus lens or a focus-switching lens, realizing far-distance shooting and close-up shooting with a low-cost, simple configuration.
- FIG. 8A shows an example of a captured image of a QR code before deblurring processing
- FIG. 8B shows an example of a captured image of a QR code after deblurring processing.
- Modifications (Modifications 1 and 2) of the above embodiment will now be described.
- the same components as those in the above embodiment are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 9 is a schematic diagram for explaining the deblurring operation according to the first modification.
- In the above embodiment, the number of lines in the line memory is set according to the filter size corresponding to the correction coefficient: five lines, to match the correction coefficient k2, which requires the larger filter, so that deblurring with either k1 or k2 is supported. However, the number of lines in the line memory, and with it the filter size, may be reduced.
- For example, the number of lines may be set to three, matching the filter size (3 × 3) of the normal shooting mode (when the correction coefficient k1 is used). The macro shooting mode originally requires five lines, as described above, but by thinning out two lines it can be handled with a three-line line memory.
- Specifically, predetermined lines (b2, b4) are thinned out of the five lines of pixel data (b1 to b5) (writing to the line memory is performed every other line), and the remaining three lines (b1, b3, b5) become the deblurring target.
- The deblurring process can then be performed with a 3 × 5 filter size.
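The thinning scheme of Modification 1 can be sketched as follows (a hypothetical illustration; the row-selection indices simply follow the b1/b3/b5 description above, and the uniform kernel is a placeholder):

```python
import numpy as np

def thin_lines(block: np.ndarray) -> np.ndarray:
    """Modification 1: keep every other line (b1, b3, b5 of b1..b5), so a
    three-line memory covers the five-line span of the macro-mode filter."""
    return block[0::2]

def thin_kernel(kernel: np.ndarray) -> np.ndarray:
    """Drop the correction-filter rows matching the thinned lines: 5x5 -> 3x5."""
    return kernel[0::2, :]

five_lines = np.arange(25).reshape(5, 5)   # rows stand for lines b1..b5
kernel_5x5 = np.ones((5, 5)) / 25.0        # placeholder macro-mode kernel
thinned = thin_lines(five_lines)
reduced_kernel = thin_kernel(kernel_5x5)
```

Both the data block and the kernel shrink to 3 × 5, which is why a three-line memory suffices while still spanning the five-line vertical extent of the near-point blur.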
- In this way, the number of lines in the line memory of the storage unit 121 need not be matched to the macro shooting mode with its large filter size. Since the macro shooting mode is mainly used for data recognition, such as reading a QR code, it suffices that the captured image retains enough resolution for that recognition. For this reason, as in this modification, predetermined lines may be thinned out before being held in the line memory of the storage unit 121. Even with this configuration, the same effect as the above embodiment is obtained, while the filter size during deblurring is reduced.
- the number of lines in the line memory in the storage unit 121 is not limited to “5” or “3” described above, and may be other numbers of lines. It can be set as appropriate according to the size of the PSF of the correction coefficient to be used.
- The reduction in the number of lines in the line memory can also be realized by, for example, mixing (adding) pixel data within the imaging element 11 or in digital processing.
- <Modification 2> In the above embodiment, since a color filter formed by arranging filters of the three primary colors R, G, and B is provided on the light-receiving surface of the imaging element 11, pixel data of all three colors are subject to the deblurring process. However, as described below, only a specific color component may be targeted. For example, when the color filter uses a Bayer arrangement, only the green component, which carries the resolution, may be deblurred. In this case, for example, as shown in FIG. 10, the pixel data arranged according to the color filter's layout is rearranged so that only the G data remains, thereby halving the number of lines.
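The G-only rearrangement of Modification 2, which halves the number of lines, can be sketched as follows (a hypothetical Python illustration assuming an RGGB phase, which the text does not specify; the function name is my own):

```python
import numpy as np

def extract_green_plane(mosaic: np.ndarray) -> np.ndarray:
    """Rearrange a Bayer mosaic so only the G samples remain. With two G
    samples per 2x2 cell, the G-only plane has half as many lines as the
    mosaic but the same number of samples per line."""
    g_even = mosaic[0::2, 1::2]  # G pixels on even rows (RGGB phase)
    g_odd = mosaic[1::2, 0::2]   # G pixels on odd rows
    h, w = mosaic.shape
    plane = np.empty((h // 2, w), dtype=mosaic.dtype)
    plane[:, 0::2] = g_even      # interleave the two G samples of each cell
    plane[:, 1::2] = g_odd
    return plane

mosaic = np.arange(16).reshape(4, 4)  # toy 4x4 mosaic of pixel values
g_plane = extract_green_plane(mosaic)
```

A 4 × 4 mosaic collapses to a 2 × 4 G plane, so the line memory and the deblurring filter need only cover half the lines; the same rearrangement applies to the B-only case described next.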
- Because the imaging positions (fB, fG, fR) of the blue light LB, green light LG, and red light LR differ from one another, and of these the blue light LB comes into focus most easily, only the blue component of the imaging data D0 may be made the target of the deblurring process. In this case, as with the green component, the number of lines can be reduced by rearranging the data so that only the B data remains.
- Alternatively, both the blue and green components may be made subject to deblurring.
- In the above embodiment, two shooting modes (the normal shooting mode and the macro shooting mode) can be switched, with one correction coefficient per mode; however, the number of shooting modes, and hence of correction coefficients, may be three or more.
- In the above, the deblurring process is performed after thinning out predetermined lines of the imaging data D0, or only specific color components of the R, G, and B pixel data are targeted. Alternatively, the luminance signal after Y/C separation may be the deblurring target. In that case, Y/C separation is performed in the preprocessing unit 120 of the image processing unit 12 to separate the luminance signal (Y) from the color signal (C), and only the luminance component, for a predetermined number of lines, is stored in the line memory of the storage unit 121.
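A minimal sketch of this Y-only variant: collapse the RGB data to a single luminance plane so only one component per pixel needs line-memory storage. A BT.601-style weighting is assumed here for illustration; the patent does not specify the Y/C matrix:

```python
import numpy as np

def luma_lines(rgb):
    """Collapse an (H, W, 3) frame to a BT.601-style luma plane so that
    only one component per pixel needs line-memory storage."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

frame = np.ones((5, 8, 3))
y = luma_lines(frame)
print(y.shape)  # (5, 8): one plane instead of three
```

Deblurring then operates on `y` alone, cutting the buffered data to one third of the full-color case while preserving the component that dominates perceived sharpness.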
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Color Television Image Signal Generators (AREA)
- Image Processing (AREA)
- Automatic Focus Adjustment (AREA)
Abstract
Description
1. Embodiment (example of an imaging device that performs deblurring by switching between far-point and near-point correction filters)
2. Modification 1 (example of applying deblurring to imaging data acquired by thinning out lines)
3. Modification 2 (example of applying deblurring to specific color components (G, B))
4. Other modifications
[Overall Configuration of Imaging Device 1]
FIG. 1 is a functional block diagram of an imaging device (imaging device 1) according to an embodiment of the present invention. The imaging device 1 includes an imaging lens 10, an image sensor 11, an image processing unit 12, an image sensor driving unit 13, an information detection unit 14, and a control unit 15. The imaging device 1 is mounted, for example, in a camera-equipped mobile phone, and can switch between a mode for shooting a two-dimensional barcode such as a QR code at close range (macro shooting mode) and a mode for shooting a distant object (normal shooting mode). In the present embodiment, the normal shooting mode assumes distant shooting with the subject far away, while the macro shooting mode assumes close-up shooting with the subject nearby.
FIG. 4 shows an example of the image processing unit 12. The image processing unit 12 includes, for example, a preprocessing unit 120, a storage unit 121, a deblurring unit 122, a correction coefficient selection unit 123, and a holding unit 124. This image processing unit 12 corresponds to a specific example of the image processing device of the present invention.
f(x,y)×h(x,y)=g(x,y) ………(1)
1/h(x,y) ………(2)
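Equation (1) models the blurred capture g(x, y) as the original image f(x, y) filtered by the PSF h(x, y), and expression (2), 1/h(x, y), is the corresponding inverse filter. As a minimal numerical sketch (not the patent's hardware implementation), the inverse filter can be applied in the frequency domain with NumPy; the `eps` regularization and the test image are illustrative assumptions:

```python
import numpy as np

def inverse_filter_deblur(g, h, eps=1e-4):
    """Recover f from g = f filtered by h, via frequency-domain inverse
    filtering F = G / H (expression (2) as 1/h).  Near-zero PSF
    frequencies are clamped to eps to keep the division stable."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    H = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H))

# Blur a test image with a known 3x3 box PSF, then invert the blur.
rng = np.random.default_rng(0)
f = rng.random((32, 32))
h = np.ones((3, 3)) / 9.0
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
f_hat = inverse_filter_deblur(g, h)
print(np.max(np.abs(f - f_hat)))  # tiny residual: the blur is undone
```

With a circular (FFT-based) blur and a PSF whose spectrum stays above `eps`, the inversion is essentially exact; on real captures, regularized variants such as Wiener filtering are preferred because plain inverse filtering amplifies noise wherever the PSF spectrum is small.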
FIG. 5(A) schematically shows how light rays are captured in the normal shooting mode. In the imaging device 1, in the normal shooting mode for example, a light ray L1 from an object (subject) passes through the imaging lens 10 and then reaches the image sensor 11. Under control of the image sensor driving unit 13, the image sensor 11 reads out the light-receiving signals line-sequentially, thereby acquiring imaging data D0 (imaging data of a distant object). The acquired imaging data D0 is supplied to the image processing unit 12, which applies predetermined image processing to it and outputs the result as image data Dout.
Specifically, as preprocessing for the deblurring, the preprocessing unit 120 first applies various kinds of image processing such as defect correction, noise removal, white balance adjustment, shading correction, and Y/C separation. The defect correction corrects defects contained in the imaging data D0 (defects caused by abnormalities in the elements of the image sensor 11 itself). The noise removal removes noise contained in the imaging data (for example, noise generated when shooting in a dark place or with insufficient sensitivity). The white balance adjustment corrects a color balance disturbed by individual device differences, such as the transmission characteristics of the color filter and the spectral sensitivity of the image sensor 11, or by illumination conditions. The shading correction corrects unevenness of the luminance level within the captured image plane. The Y/C separation separates the luminance signal (Y) from the chroma signal (C). In addition, processing that sets the black level of each pixel datum (clamping) and color interpolation such as demosaicing may also be applied. The imaging data after these processes is output to the storage unit 121 as image data D1.
The storage unit 121 stores the image data D1 input from the preprocessing unit 120 in line memories 121a of a predetermined number of lines, corresponding to the filter size used in the subsequent deblurring with the correction coefficients k1 and k2, and holds it temporarily.
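The role of the line memories 121a can be modeled as a rolling buffer whose depth matches the filter's line count (3 for k1, 5 for k2). This Python model is only an illustration of the buffering behavior, not the hardware design:

```python
from collections import deque

class LineMemory:
    """Rolling buffer holding the last n_lines rows, sized to match
    the 2-D extent of the deblurring filter (e.g. 3 for k1, 5 for k2)."""
    def __init__(self, n_lines):
        self.buf = deque(maxlen=n_lines)

    def push(self, line):
        """Append one scan line; report whether enough lines are
        buffered for the filter window to be applied."""
        self.buf.append(line)
        return len(self.buf) == self.buf.maxlen

mem = LineMemory(3)
ready = [mem.push([0] * 8) for _ in range(4)]
print(ready)  # [False, False, True, True]
```

Once the buffer reports ready, each new pushed line yields one filtered output line, which is why only the filter's line count, not the whole frame, needs to be held.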
In the present embodiment, under control of the control unit 15 based on the above-described mode setting signal Dm, the correction coefficient selection unit 123 selects, from the correction coefficients k1 and k2 held in the holding unit 124, the correction coefficient appropriate for the object distance and outputs it to the deblurring unit 122. Specifically, when the mode setting signal Dm selects the normal shooting mode, the coefficient k1 for distant objects is selected; when Dm selects the macro shooting mode, the coefficient k2 for near objects is selected. Thus, for example, when a mode setting signal specifying the macro shooting mode is detected while in the normal shooting mode, the correction coefficient is switched from k1 to k2. Likewise, when a mode setting signal specifying the normal shooting mode is detected while in the macro shooting mode, k2 is switched to k1. The selected coefficient k1 (or k2) is output to the deblurring unit 122.
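The selection logic reduces to a mode-keyed choice between k1 and k2. A hedged sketch (the mode names are placeholders, not signal encodings from the patent):

```python
def select_coefficient(mode, k1, k2):
    """Pick the deblurring coefficient for the mode signal Dm:
    k1 for far objects (normal mode), k2 for near objects (macro)."""
    return k2 if mode == "macro" else k1

print(select_coefficient("macro", "k1", "k2"))   # k2
print(select_coefficient("normal", "k1", "k2"))  # k1
```

Extending to three or more modes, as the later modification allows, would turn this into a lookup from mode to coefficient rather than a two-way branch.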
The deblurring unit 122 applies deblurring, using the correction coefficient supplied by the correction coefficient selection unit 123, to the image data D2 for the predetermined number of lines supplied from the storage unit 121. Specifically, in the normal shooting mode, blur correction is performed by multiplying three lines of image data D2 by the coefficient k1; in the macro shooting mode, by multiplying five lines of image data D2 by the coefficient k2. Through such deblurring, blur in the captured image corresponding to the image data D2 is well corrected, and image data Dout focused on the subject is output.
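The per-line blur correction, multiplying an n-line window of D2 by an n x n coefficient kernel, can be sketched as a "valid" 2-D correlation producing one output line. The kernel below is a placeholder, not the actual k1 or k2:

```python
import numpy as np

def deblur_line(lines, k):
    """Apply an n x n deblurring kernel k to a stack of n buffered
    lines, producing one corrected output line ('valid' columns only)."""
    n = k.shape[0]
    w = lines.shape[1]
    out = np.empty(w - n + 1)
    for x in range(w - n + 1):
        out[x] = np.sum(lines[:, x:x + n] * k)
    return out

lines = np.ones((3, 8))            # three buffered lines (normal mode)
k = np.full((3, 3), 1 / 9)         # placeholder kernel, not the real k1
print(deblur_line(lines, k))       # each output sample is about 1.0
```

In macro mode the same loop would run over five buffered lines with a 5 x 5 kernel, which is why the line-memory depth is tied to the coefficient's two-dimensional size.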
FIG. 9 is a schematic diagram for explaining the deblurring operation according to Modification 1. In the embodiment above, the number of lines in the line memory was set to match the filter size corresponding to the correction coefficient: so that deblurring with either k1 or k2 could be accommodated, the number of lines was set to five to match k2, which requires the larger filter size. However, the number of lines in the line memory may instead be reduced so that the filter size becomes smaller.
Also, in the embodiment above, since a color filter in which filters of the three primary colors R, G, and B are arranged is provided on the light-receiving surface of the image sensor 11, pixel data of all three colors are subject to deblurring; however, only a specific color component may be targeted, as follows. For example, when the color arrangement of the color filter is a Bayer arrangement, only the green component, which carries the resolution, may be targeted. In this case, as shown in FIG. 10, by rearranging the pixel data, laid out according to the color arrangement of the color filter, into an arrangement containing only the G data, the number of lines can be reduced by half.
The present invention has been described above with reference to the embodiment and modifications, but the invention is not limited to these, and various modifications are possible. For example, in the above embodiment and elsewhere, the shooting mode is switched between the two modes, normal and macro, and the two correction coefficients corresponding to the respective modes can be switched; however, the number of shooting modes and corresponding correction coefficients may be three or more.
Claims (10)
- An imaging lens;
an image sensor that acquires imaging data based on light rays passing through the imaging lens; and
an image processing unit that applies image processing to a captured image based on the imaging data,
wherein the image processing unit includes:
a deblurring unit that applies blur correction to the captured image; and
a correction coefficient selection unit that selects one of a plurality of blur correction coefficients, each set according to the object distance from the imaging lens to a subject, and outputs the selected blur correction coefficient to the deblurring unit;
an imaging device comprising the above. - The plurality of correction coefficients include at least a first correction coefficient for near objects and a second correction coefficient for distant objects,
The imaging device according to claim 1. - Each of the plurality of correction coefficients is an inverse function of a point spread function (PSF),
The imaging device according to claim 1. - Further comprising a storage unit that has a plurality of line memories and stores and holds the imaging data in the plurality of line memories,
wherein the storage unit stores, in the plurality of line memories, pixel data of a number of lines of the imaging data corresponding to the two-dimensional size of the correction coefficient,
The imaging device according to claim 1. - Further comprising a storage unit that has a plurality of line memories and stores and holds the imaging data in the plurality of line memories,
wherein the storage unit stores pixel data of the imaging data in the plurality of line memories with predetermined lines thinned out,
The imaging device according to claim 1. - The imaging data includes three color signals of red (R), green (G), and blue (B), and
the deblurring unit applies blur correction only to a specific color signal of the imaging data,
The imaging device according to claim 1. - The specific color signal is one or both of a blue signal and a green signal,
The imaging device according to claim 6. - The imaging data is separated into a luminance signal and a color signal, and
the deblurring unit applies blur correction only to the separated luminance signal,
The imaging device according to claim 1. - The correction coefficient selection unit selects the one correction coefficient based on a mode setting signal input from outside,
The imaging device according to claim 1. - A deblurring unit that applies blur correction to a captured image acquired by an image sensor and based on light rays passing through an imaging lens; and
a correction coefficient selection unit that selects one of a plurality of blur correction coefficients, each set according to the object distance from the imaging lens to a subject, and outputs the selected coefficient to the deblurring unit;
an image processing device comprising the above.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/695,946 US8937680B2 (en) | 2010-05-12 | 2011-04-28 | Image pickup unit and image processing unit for image blur correction |
EP11780539A EP2571247A1 (en) | 2010-05-12 | 2011-04-28 | Imaging device and image processing device |
CN2011800323248A CN103039067A (zh) | 2010-05-12 | 2011-04-28 | Image pickup unit and image processing unit |
KR1020127028746A KR20130069612A (ko) | 2010-05-12 | 2011-04-28 | Imaging device and image processing device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-110368 | 2010-05-12 | ||
JP2010110368A JP5454348B2 (ja) | Imaging device and image processing device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011142282A1 true WO2011142282A1 (ja) | 2011-11-17 |
Family
ID=44914338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/060467 WO2011142282A1 (ja) | 2010-05-12 | 2011-04-28 | 撮像装置および画像処理装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US8937680B2 (ja) |
EP (1) | EP2571247A1 (ja) |
JP (1) | JP5454348B2 (ja) |
KR (1) | KR20130069612A (ja) |
CN (1) | CN103039067A (ja) |
TW (1) | TWI458342B (ja) |
WO (1) | WO2011142282A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112013004760B4 (de) * | 2012-09-27 | 2017-10-19 | Fujifilm Corporation | Image capture device and image processing method |
EP3826338A1 (en) * | 2014-11-24 | 2021-05-26 | Samsung Electronics Co., Ltd. | Selecting a service contract option for a wearable electronic device |
KR102390980B1 (ko) * | 2015-07-24 | 2022-04-26 | LG Display Co., Ltd. | Image processing method, image processing circuit, and display device using the same |
US11165932B2 (en) | 2015-12-22 | 2021-11-02 | Kyocera Corporation | Imaging apparatus, imaging system, vehicle and foreign matter determination method |
US10244180B2 (en) * | 2016-03-29 | 2019-03-26 | Symbol Technologies, Llc | Imaging module and reader for, and method of, expeditiously setting imaging parameters of imagers for imaging targets to be read over a range of working distances |
CN108076267A (zh) * | 2016-11-11 | 2018-05-25 | Toshiba Corporation | Imaging device, imaging system, and distance information acquisition method |
CN107302632A (zh) * | 2017-06-28 | 2017-10-27 | Nubia Technology Co., Ltd. | Mobile terminal photographing method, mobile terminal, and computer-readable storage medium |
CN115699012A (zh) | 2020-06-02 | 2023-02-03 | Sony Group Corporation | Information processing device, information processing method, and program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003287505A (ja) * | 2002-03-27 | 2003-10-10 | Toshiba Corp | Defect inspection device |
WO2006022373A1 (ja) * | 2004-08-26 | 2006-03-02 | Kyocera Corporation | Imaging device and imaging method |
JP2007206738A (ja) * | 2006-01-30 | 2007-08-16 | Kyocera Corp | Imaging device and method thereof |
JP2008042874A (ja) * | 2006-07-14 | 2008-02-21 | Eastman Kodak Co | Image processing device, image restoration method, and program |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3630169B2 (ja) * | 1993-10-25 | 2005-03-16 | Konica Minolta Holdings, Inc. | Image processing device |
US20010013895A1 (en) * | 2000-02-04 | 2001-08-16 | Kiyoharu Aizawa | Arbitrarily focused image synthesizing apparatus and multi-image simultaneous capturing camera for use therein |
JP2005286536A (ja) * | 2004-03-29 | 2005-10-13 | Fujinon Corp | Imaging device |
US8036481B2 (en) * | 2006-07-14 | 2011-10-11 | Eastman Kodak Company | Image processing apparatus and image restoration method and program |
JP4453734B2 (ja) * | 2007-09-21 | 2010-04-21 | Sony Corporation | Image processing device, image processing method, image processing program, and imaging device |
JP5235642B2 (ja) * | 2008-12-15 | 2013-07-10 | Canon Inc. | Image processing device and method thereof |
-
2010
- 2010-05-12 JP JP2010110368A patent/JP5454348B2/ja not_active Expired - Fee Related
-
2011
- 2011-04-28 US US13/695,946 patent/US8937680B2/en not_active Expired - Fee Related
- 2011-04-28 EP EP11780539A patent/EP2571247A1/en not_active Withdrawn
- 2011-04-28 CN CN2011800323248A patent/CN103039067A/zh active Pending
- 2011-04-28 WO PCT/JP2011/060467 patent/WO2011142282A1/ja active Application Filing
- 2011-04-28 KR KR1020127028746A patent/KR20130069612A/ko not_active Application Discontinuation
- 2011-05-03 TW TW100115500A patent/TWI458342B/zh not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
TWI458342B (zh) | 2014-10-21 |
CN103039067A (zh) | 2013-04-10 |
TW201204020A (en) | 2012-01-16 |
KR20130069612A (ko) | 2013-06-26 |
EP2571247A1 (en) | 2013-03-20 |
JP5454348B2 (ja) | 2014-03-26 |
US8937680B2 (en) | 2015-01-20 |
US20130050543A1 (en) | 2013-02-28 |
JP2011239292A (ja) | 2011-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5026951B2 (ja) | Image sensor drive device, image sensor drive method, imaging device, and image sensor | |
JP5454348B2 (ja) | Imaging device and image processing device | |
US7978240B2 (en) | Enhancing image quality imaging unit and image sensor | |
JP4691930B2 (ja) | Physical information acquisition method, physical information acquisition device, semiconductor device for physical quantity distribution detection, program, and imaging module | |
US12107098B2 (en) | Image sensor, focus adjustment device, and imaging device | |
US20240089629A1 (en) | Image sensor and imaging device | |
JP6442362B2 (ja) | Imaging device and image sensor control method | |
WO2014041845A1 (ja) | Imaging device and signal processing method | |
JP4609092B2 (ja) | Physical information acquisition method and physical information acquisition device | |
US7349015B2 (en) | Image capture apparatus for correcting noise components contained in image signals output from pixels | |
TWI390972B (zh) | The autofocus method in the high noise environment and its application of the digital acquisition device of the method | |
JP5452269B2 (ja) | Imaging device | |
JP6594048B2 (ja) | Imaging device and control method thereof | |
JP6992877B2 (ja) | Image sensor and imaging device | |
CN112770069A (zh) | Image device | |
JP5311927B2 (ja) | Imaging device and imaging method | |
JP2013197612A (ja) | Imaging device, image processing device, and program | |
JP5629568B2 (ja) | Imaging device and pixel addition method thereof | |
WO2021192176A1 (ja) | Imaging device | |
JP2007143067A (ja) | Imaging device and imaging system | |
JP2024160294A (ja) | Image sensor and imaging device | |
JP2016225848A (ja) | Imaging device and image sensor control method | |
JP5637326B2 (ja) | Imaging device | |
JP2010251855A (ja) | Imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180032324.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11780539 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20127028746 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011780539 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13695946 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |