WO2006064751A1 - Compound-Eye Imaging Apparatus (複眼撮像装置) - Google Patents
Compound-Eye Imaging Apparatus (複眼撮像装置)
- Publication number
- WO2006064751A1 (PCT/JP2005/022751)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- imaging
- amount
- blur
- pixel
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/58—Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/13—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
- H04N23/16—Optical arrangements associated therewith, e.g. for beam-splitting or for colour correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
Definitions
- the present invention relates to a compound eye imaging apparatus having a pixel shifting function.
- An imaging device used for a portable device is required to achieve both high resolution and small size.
- the downsizing of the imaging device is limited by the size and focal length of the imaging optical lens and the size of the imaging device.
- the optical system of a normal imaging device has a configuration in which a plurality of lenses are stacked in order to form red, green, and blue wavelengths of light on the same imaging surface.
- the optical system of the imaging device is inevitably long and the imaging device is thick. Therefore, a compound eye type imaging apparatus using a single lens with a short focal length has been proposed as an effective technique for downsizing, in particular, thinning of the imaging apparatus (for example, Patent Document 1).
- an imaging optical system is arranged in a plane with a lens that handles light of a blue wavelength, a lens that handles light of a green wavelength, and a lens that handles light of a red wavelength.
- the imaging region is provided for each lens.
- a plurality of imaging elements may be arranged side by side, or a single imaging element may be divided into a plurality of regions.
- a subject image can be formed on the imaging surface by a single lens, and the thickness of the imaging device can be significantly reduced.
- FIG. 19 is a schematic perspective view of the main part of an example of a conventional compound eye type imaging apparatus.
- Reference numeral 1900 denotes a lens array, which is formed by three lenses 1901a, 1901b, and 1901c.
- Reference numeral 1901a denotes a lens that handles light of a red wavelength, and converts the formed subject image into image information in an imaging region 1902a in which a red wavelength separation filter (color filter) is attached to the light receiving unit.
- 1901b is a lens that handles light of the green wavelength; the formed image is converted into green image information in the imaging region 1902b.
- 1901c is a lens that handles light of the blue wavelength; the formed image is converted into blue image information in the imaging region 1902c.
- the compound-eye imaging device can reduce the thickness of the imaging apparatus, but when the images of each color are simply superimposed and synthesized, the resolution of the synthesized image is determined by the number of pixels of each single-color image. For this reason, there is a problem that the resolution is inferior to that of an ordinary Bayer-type imaging device in which green, red, and blue filters are arranged in a staggered manner.
- FIG. 20 is a conceptual explanatory diagram of high resolution using pixel shifting. This figure shows a part of the enlarged portion of the image sensor.
- the image sensor has a photoelectric conversion unit 2101 (hereinafter “photoelectric conversion unit”) that converts received light into an electric signal, and an invalid portion 2102 (hereinafter “invalid portion”), such as transfer electrodes, where received light cannot be converted into an electric signal.
- the photoelectric conversion unit 2101 and the invalid portion 2102 are combined to form one pixel. These pixels are usually formed regularly at a certain interval (pitch).
- the part surrounded by the thick line in Fig. 20A is one pixel, and P indicates one pitch.
- an outline of pixel shifting performed using such an image sensor is as follows.
- first, photographing is performed at the position of the image sensor shown in FIG. 20A.
- next, as shown in FIG. 20B, the photoelectric conversion unit 2101 of each pixel is moved in an oblique direction (by 1/2 of the pixel pitch in both the horizontal and vertical directions) so as to move onto the invalid portion 2102, and shooting is performed again.
- these two captured images are combined as shown in FIG. 20C.
- the imaging state in FIG. 20C has the same resolution as an image captured by an imaging element having twice as many photoelectric conversion units as the single shot captured by the imaging element in FIG. 20A. Therefore, by performing pixel shifting as described above, an image equivalent to one shot with an image sensor having twice the number of pixels can be obtained without increasing the number of pixels of the image sensor.
- the shift is not limited to the oblique direction illustrated; shifting in the horizontal or vertical direction improves the resolution in the shifted direction. For example, when vertical and horizontal shifts are combined, a resolution of 4 times can be obtained. The pixel shift amount also need not be limited to 0.5 pixels; the resolution can be further improved by finely shifting pixels so as to interpolate the invalid portions.
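To make the concept concrete, here is a minimal sketch (not from the patent) of combining two frames taken 1/2 pixel pitch apart in the horizontal direction into a frame with doubled horizontal sampling; the function name and array layout are assumptions for the example:

```python
import numpy as np

def combine_half_pixel_shift(img0: np.ndarray, img1: np.ndarray) -> np.ndarray:
    """Interleave two frames taken 1/2 pixel pitch apart horizontally
    into one frame with doubled horizontal sampling (the FIG. 20 idea,
    applied in one dimension)."""
    h, w = img0.shape
    out = np.empty((h, 2 * w), dtype=img0.dtype)
    out[:, 0::2] = img0   # samples at integer pixel positions
    out[:, 1::2] = img1   # samples at half-pixel positions
    return out
```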
- here, the pixels are shifted by moving the image sensor to change the relative positional relationship between the image sensor and the incident light beam, but the method is not limited to this.
- the optical lens may be moved instead of the image sensor.
- a method using a parallel plate has been proposed (for example, Patent Document 1).
- the image formed on the image sensor is shifted by inclining parallel plates.
- another method is to detect and correct camera shake using shake detection means such as an angular velocity sensor.
- a method of correcting by using both the camera shake correction mechanism and the pixel shifting mechanism has been proposed (for example, Patent Document 2 and Patent Document 3).
- the amount of shake is detected using the shake detection means, the pixel shift direction and pixel shift amount are corrected based on the detected shake amount, and then the image sensor is moved to shift the pixels. By doing so, the influence of camera shake can be reduced.
- in Patent Document 3, the method need not be limited to moving the image sensor; by moving a part of the optical lens in accordance with the detected amount of shake, camera shake correction and pixel shifting can be performed with the same effect.
- various methods have been proposed for blur detection, such as a method using an angular velocity sensor such as a vibration gyroscope, and a method of comparing images taken in time series to obtain a motion vector.
- Patent Document 3 proposes a method that compares a plurality of images taken in time series and selects and combines only those images whose positional relationship, shifted by camera shake or the like, can be expected to improve the resolution. This method is performed entirely electronically and, since no mechanical mechanism for correcting camera shake is required, can reduce the size of the imaging apparatus.
- however, the method of Patent Documents 2 and 3, which detects camera shake and performs both camera shake correction and pixel shifting, requires a new sensor and a complicated optical system, and is therefore disadvantageous for miniaturization and thinning.
- Patent Document 1: JP H06-261236 A
- Patent Document 2: JP H11-225284 A
- Patent Document 3: JP H10-191135 A
- the present invention solves the above-described conventional problems, and an object thereof is to provide a compound-eye imaging device that performs pixel shifting and can prevent a decrease in the pixel-shifting effect even when there is camera shake or subject blur.
- the compound-eye imaging apparatus of the present invention includes a plurality of imaging systems, each having an optical system and an imaging element, with mutually different optical axes. The plurality of imaging systems include a first imaging system having a pixel shifting unit that changes the relative positional relationship between the image formed on its image sensor and that image sensor, and a second imaging system in which the relative positional relationship between the image formed on its image sensor and that image sensor is fixed during time-series imaging.
- FIG. 1 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a flowchart showing the overall operation of the imaging apparatus according to Embodiment 1 of the present invention.
- FIG. 3 is a diagram showing a positional relationship between a comparison source region and an evaluation region according to an embodiment of the present invention.
- FIG. 4 is a diagram showing image movement due to camera shake in the embodiment of the present invention.
- FIG. 5 is a diagram for explaining adjustment of a pixel shift amount in one embodiment of the present invention.
- FIG. 6 is a configuration diagram of an image pickup optical system, a pixel shifting unit, and an image pickup element according to Embodiment 1 of the present invention.
- FIG. 7 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 2 of the present invention.
- FIG. 8 is a flowchart of the overall operation of the imaging apparatus according to Embodiment 2 of the present invention.
- FIG. 9 is a diagram for explaining parallax in an embodiment of the present invention.
- FIG. 10 is a diagram for explaining a method for selecting an optimum image in one embodiment of the present invention.
- FIG. 11 is another diagram for explaining a method for selecting an optimum image in the embodiment of the present invention.
- FIG. 12 is a diagram showing an image stored in the image memory after pixel shifting once in Embodiment 2 of the present invention.
- FIG. 13 is a diagram showing images taken in time series with the second imaging system without pixel shift stored in the image memory in Example 3 of the present invention.
- FIG. 14 is a diagram showing an image photographed in Example 3 of the present invention and a subject group determined by subject determination means.
- FIG. 15 is a configuration diagram of an image pickup optical system, a pixel shifting unit, and an image pickup device according to Example 5 of the present invention.
- FIG. 16 is a plan view of a piezoelectric fine movement mechanism according to an embodiment of the present invention.
- FIG. 17 is a diagram showing an example of an arrangement of an optical system according to an embodiment of the present invention.
- FIG. 18 is a flowchart of the entire operation in the imaging apparatus according to Embodiment 3 of the present invention.
- FIG. 19 is a schematic perspective view of a main part of an example of a conventional compound-eye imaging device.
- FIG. 20 is a conceptual explanatory diagram of high resolution using conventional pixel shifting.
- according to the present invention, the imaging apparatus using the compound-eye optical system can be reduced in size and thickness, and by comparing images taken in time series by the second imaging system, which does not shift pixels, the amount of camera shake of the apparatus can be detected. Using this blur amount, camera shake can be corrected in images shot with the first imaging system, which does shift pixels. That is, downsizing and thinning of the imaging device and high resolution can both be achieved.
- the apparatus preferably further includes an image memory that stores image information of a plurality of frames taken in time series, a blur amount deriving unit that derives the blur amount by comparing the image information of the plurality of frames stored in the image memory, and an image synthesizing unit that synthesizes the images of the plurality of frames stored in the image memory.
- the amount of change in the positional relationship by the pixel shifting unit is determined based on the amount of blur obtained by the blur amount deriving unit. According to this configuration, the pixel shift amount can be adjusted according to the amount of camera shake, which is advantageous for improving the resolution.
- the amount of change in the positional relationship by the pixel shifting unit may be fixed. According to this configuration, it is not necessary to derive the blur amount during shooting and adjust the pixel shift amount, and the time-series shooting time interval can be shortened. This reduces camera shake and enables shooting even when the subject moves quickly.
- the apparatus preferably further includes a parallax amount deriving unit that obtains the magnitude of the parallax from the images captured by the plurality of imaging systems having different optical axes, and the image synthesizing unit preferably corrects and synthesizes the images based on the parallax amount obtained by the parallax amount deriving unit and the blur amount obtained by the blur amount deriving unit. According to this configuration, when an image is corrected, not only the blur but also the parallax, which depends on the distance of the subject, is corrected, so the resolution of the synthesized image can be further increased. That is, a decrease in resolution depending on the distance of the subject can be prevented.
- the apparatus preferably further includes an optimum image selection unit that selects the image information used for synthesis by the image synthesizing unit from the image information captured by the first imaging system and the image information captured by the second imaging system.
- according to this configuration, since the first and second imaging systems can obtain images before and after blurring, images with parallax, and images with pixel shifts, images suitable for improving the resolution can be chosen without depending on chance.
- it is preferable that the blur amount deriving unit derives a blur amount for each different subject and that the image synthesizing unit synthesizes an image for each different subject. According to this configuration, by deriving the blur amount for each subject, the resolution can be improved even when the entire image does not move uniformly because a subject is moving.
- the apparatus preferably further includes means for dividing the image information into a plurality of blocks, with the blur amount deriving unit deriving a blur amount for each of the plurality of blocks and the image synthesizing unit synthesizing an image for each of the plurality of blocks. This configuration also makes it possible to improve the resolution when the subject moves; furthermore, the subject need not be detected, so the processing time can be shortened.
- it is preferable that the plurality of imaging systems having different optical axes include an imaging system that handles red, an imaging system that handles green, and an imaging system that handles blue, that the number of imaging systems corresponding to at least one color is two or more, and that the two or more imaging systems handling the same color include the first imaging system and the second imaging system. According to this configuration, a color image with improved resolution can be obtained.
- FIG. 1 is a block diagram illustrating a configuration of the imaging apparatus according to the first embodiment.
- the system control unit 100 is a central processing unit (CPU) that controls the entire imaging apparatus.
- the system control means 100 controls the pixel shifting means 101, the transfer means 102, the image memory 103, the blur amount deriving means 104, and the image composition means 105.
- a subject to be photographed (not shown) is photographed by the first imaging system 106b having the pixel shifting means 101 and the second imaging system 106a having no pixel shifting function.
- through the imaging optical systems 107a and 107b, the subject forms images on the imaging elements 108a and 108b, where they are converted into image information as light intensity distributions.
- the pixel shifting means 101 shifts the relative positional relationship between the subject image formed on the image sensor 108b by the imaging optical system 107b and the image sensor 108b, in the in-plane direction of the image sensor 108b. That is, the pixel shifting means 101 can change the relative positional relationship between the image sensor 108b and the incident light beam during time-series imaging.
- the positional relationship between the imaging optical system 107a and the imaging element 108a is set so as not to deviate in the in-plane direction of the imaging element 108a. Therefore, the relative positional relationship between the subject image formed on the image sensor 108a by the imaging optical system 107a and the image sensor 108a is fixed in time-series imaging. That is, in the second imaging system 106a, the relative positional relationship between the imaging device 108a and the incident light incident on the imaging device 108a is fixed in time-series imaging.
- the transfer means 102 transmits the image information photoelectrically converted by the image sensors 108a and 108b to an image memory 103 that stores an image.
- the first imaging system 106b and the second imaging system 106a are individually driven, and each image is sequentially transferred to the image memory 103 and stored. As will be described later, the pixel shift amount is adjusted while detecting the blur amount using the image captured by the second imaging system 106a. For this reason, the second imaging system 106a can be driven at high speed. That is, the second imaging system 106a can increase the number of times of capturing images per unit time.
- the blur amount deriving unit 104 derives a blur amount by comparing image information captured at different times (in time series) by the second imaging system 106a, whose optical system does not shift pixels. The details will be described later.
- the pixel shift amount of the first imaging system 106b is set so as to correct this blur amount, and the pixel-shifted image is stored in the image memory 103.
- the image synthesizing unit 105 synthesizes images captured by the first imaging system 106b and the second imaging system 106a and stored in the image memory 103, and generates a high-resolution image.
- FIG. 2 is a flowchart showing the overall operation of the imaging apparatus according to the present embodiment.
- Shooting starts in response to the shooting start command in step 200.
- shooting pre-processing in step 201 is performed. This calculates the optimal exposure time and performs the focusing process.
- focusing may be performed by measuring the distance of the subject using a laser or radio wave.
- there are several methods for setting the optimal exposure time in consideration of ambient light and the like.
- These include a method of detecting the brightness with an illuminance sensor and setting the exposure time, and a method of providing a preview function for capturing an image before starting shooting.
- in the preview method, the image captured before the start of shooting is converted to grayscale brightness information and its histogram is examined. If the histogram is biased toward white (bright), the image is judged to be overexposed (exposure time too long); if it is biased toward black (dark), it is judged to be underexposed (exposure time too short); and the exposure time is adjusted accordingly.
- the time from the shooting start command to the start of exposure can be shortened.
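As a sketch of the preview-based exposure adjustment described above (the brightness thresholds and scaling factors are illustrative assumptions, not values from the patent):

```python
import numpy as np

def adjust_exposure(preview_gray: np.ndarray, exposure_s: float) -> float:
    """Crude preview-based exposure control: if the grayscale histogram
    is biased toward white, the shot is judged overexposed and the
    exposure time is shortened; if biased toward black, it is lengthened."""
    mean = preview_gray.mean()     # preview converted to 0..255 grayscale
    if mean > 180:                 # biased to white -> overexposed
        return exposure_s * 0.5
    if mean < 75:                  # biased to black -> underexposed
        return exposure_s * 2.0
    return exposure_s
```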
- in step 202, imaging by pixel shifting is performed. This photographing is performed by repeating the processes from step 203 to step 208.
- Step 203 is an exposure process of the second imaging system 106a
- step 204 is a process of transferring an image captured by the second imaging system 106a to the image memory 103. Images taken at different times by the second imaging system 106a are transferred to the image memory 103.
- in step 205, the images stored in the image memory 103 are compared to determine the blur amount (the amount of camera shake of the imaging apparatus).
- in step 206, based on the pixel shift amount adjusted to reflect the blur amount obtained in step 205, the first imaging system 106b shifts the pixels and takes an image.
- Step 207 is an exposure process of the first imaging system 106b
- step 208 is a process of transferring an image photographed by the first imaging system 106b to the image memory 103.
- the blur amount derivation will be described first. As described above, when images are taken at different times, the image may be blurred by camera shake or subject shake in the interval. To utilize the invalid portions of the pixels by pixel shifting, the pixel shift amount must be determined with this blur taken into account.
- therefore, in step 202, immediately before pixel shifting, images taken at different times by the second imaging system 106a (which does not shift pixels) are captured, the blur amount is calculated, and the result is reflected in the pixel shift amount.
- the camera shake amount of the imaging apparatus is obtained as described above.
- the specific method will be described.
- in time-series images, the subject appears to move between frames.
- if the time interval is short, the shape of the subject can be regarded as unchanged, with only its position moving. For this reason, of two images with different shooting times, one is taken as the comparison source image and the other as the comparison destination image, and it is examined to which part of the comparison destination image a predetermined region of the comparison source image has moved. In this way, it is possible to determine how the image has moved.
- a specific region in the comparison source image (hereinafter “comparison source region”) is compared with an evaluation region of the same size set in the comparison destination image, and how similar the comparison source region and the evaluation region are is evaluated.
- evaluation regions are set sequentially at different positions, and the movement destination of the comparison source region is searched for while performing the above evaluation in each evaluation region. The evaluation region most similar to the comparison source region is the movement destination of the comparison source region.
- since the image captured by the image sensor can be regarded as a set of light intensities corresponding to the pixels, if the light intensity of the pixel x-th to the right in the horizontal direction and y-th down in the vertical direction from the origin at the upper left is I(x, y), the image can be considered as a distribution of this light intensity I(x, y).
- FIG. 3 shows the positional relationship between the comparison source region 301 and the evaluation region 302.
- the comparison source region is set as a rectangle whose upper-left pixel position is (x1, y1) and whose lower-right pixel position is (x2, y2).
- the evaluation region (m, n), moved m pixels to the right and n pixels downward from the comparison source region, is the region whose upper-left pixel is (x1+m, y1+n) and whose lower-right pixel is (x2+m, y2+n).
- the evaluation value R(m, n) is defined by (Equation 1) below as the sum of absolute differences in light intensity, and takes a smaller value as the correlation between the light intensity distributions (images) of the comparison source region and the evaluation region becomes larger (more similar):

  R(m, n) = Σ (x = x1..x2) Σ (y = y1..y2) | I(x, y) − I(x + m, y + n) |   … (Equation 1)

- m and n need not be limited to integers; interpolated light intensities I′(x, y) between the pixels can be created from the original light intensity I(x, y), and the evaluation value R(m, n) can then be calculated from (Equation 1) based on I′(x, y).
- for the data interpolation, any method such as linear or nonlinear interpolation may be used.
- the values of m and n are changed, and an evaluation region whose evaluation value is most similar to the comparison source region is searched with subpixel accuracy.
- since the blur direction of camera shake and subject blur is not limited to a specific direction, the values of m and n must also include negative values (evaluating regions moved leftward or upward).
- although m and n may be varied so that the entire comparison destination image is evaluated, an image that moves greatly due to camera shake or the like and falls outside the light receiving range of the image sensor cannot be synthesized, so it is generally preferable to limit m and n to a predetermined range to shorten the calculation time.
- the combination of m and n at which the evaluation value R(m, n) takes its minimum value is the blur amount, indicating the position of the comparison destination region corresponding to the comparison source region.
- comparison source region need not be limited to a rectangle, and an arbitrary shape can be set.
- the calculation of the evaluation value need not be limited to the sum of absolute values of the light intensity differences; the evaluation value may be calculated by other similarity measures.
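A minimal integer-pixel sketch of the (Equation 1) search, assuming the images are 2-D numpy arrays; the sub-pixel variant described above would evaluate R(m, n) on interpolated intensities I′(x, y):

```python
import numpy as np

def find_blur(src: np.ndarray, dst: np.ndarray,
              x1: int, y1: int, x2: int, y2: int,
              search: int = 8) -> tuple[int, int]:
    """Return the (m, n) minimizing the evaluation value R(m, n) of
    (Equation 1) between the comparison source region of `src` and
    evaluation regions of `dst`, over a limited search range."""
    ref = src[y1:y2 + 1, x1:x2 + 1].astype(np.int64)
    h, w = dst.shape
    best_r, best_mn = None, (0, 0)
    for n in range(-search, search + 1):        # include negative shifts
        for m in range(-search, search + 1):
            if y1 + n < 0 or x1 + m < 0 or y2 + n >= h or x2 + m >= w:
                continue                        # evaluation region leaves frame
            cand = dst[y1 + n:y2 + n + 1, x1 + m:x2 + m + 1].astype(np.int64)
            r = np.abs(ref - cand).sum()        # R(m, n)
            if best_r is None or r < best_r:
                best_r, best_mn = r, (m, n)
    return best_mn
```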
- the comparison method using image correlation can also be used for obtaining the parallax amount described later, and for calibration of the pixel shifting means. For example, by taking images before and after pixel shifting and evaluating the shift amount of the image, it can be confirmed whether the actuator used for pixel shifting is moving accurately under the surrounding environment (temperature, deterioration over time). Such processing ensures reliable pixel shifting by the actuator.
- FIG. 4 is a diagram showing image movement due to camera shake in the present embodiment.
- this figure shows an example of shooting a landscape with little subject movement.
- FIG. 4A is a diagram of the case where the subject and the camera move in parallel with each other, and FIG. 4B of the case where the camera is rotated in the horizontal direction; in each case, FIG. 4C shows the change in the image between shooting times 1 and 2.
- the resolution is further improved by detecting and correcting image distortion due to this rotation.
- calculating the image blur amount for only one specific evaluation region can determine only the parallel movement of the image, so multiple evaluation regions are set and the blur amount at each location is calculated.
- the amount of camera shake and image distortion in each evaluation region can be obtained.
- FIG. 5 is a diagram for explaining adjustment of the pixel shift amount. It shows an enlarged part of the image sensor, together with the assumed pixel shift vector 400, the blur vector 401 detected by the blur deriving means, and the actual pixel shift vector 402.
- an image in which the shift amounts of the blur vector 401 in the X and Y directions are an integer pitch (an integer multiple of one pixel pitch) can be regarded as the same as an image whose pixel coordinates are shifted by an integer number of pixels.
- in that case, shooting at shooting time 2 by the second imaging system 106a, which does not shift pixels, is the same as shooting the image already shot at shooting time 1 with different pixels.
- in the first imaging system 106b, as when there is no camera shake at all, shifting 0.5 pixels in the X direction as in vector 400 allows the invalid portion 405 on the right side of the photoelectric conversion unit 404 to be photographed, and the pixel shifting effect is obtained.
- even when there is camera shake, the pixel shifting effect can be obtained if a new pixel shift vector is set so that the part of the total shift below the integer pitch (the fractional part) is the same as that of vector 400.
- in FIG. 5, the part of the blur vector 401 below the integer pitch is 0.25 pixels in the X direction and 0.5 pixels in the Y direction.
- the new pixel shift vector should therefore be set so that the part of the total shift below the integer pitch is 0.5 pixels in the X direction and 0 pixels in the Y direction.
- to achieve this, the pixel shift vector is set to 0.25 pixels in the X direction and 0.5 pixels in the Y direction, as in vector 402 in FIG. 5 (0.25 + 0.25 = 0.5 in X; 0.5 + 0.5 = 1.0, an integer pitch, in Y).
- the resulting positional relationship is the same as when pixel shifting is performed with vector 400 and no camera shake. That is, according to the present embodiment, since the pixel shift vector is adjusted in accordance with the blur vector, the pixel shifting effect can always be obtained.
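The adjustment rule can be stated compactly: the actual shift is chosen so that the fractional part of blur + shift equals the desired fractional shift. A minimal sketch, with values in units of one pixel pitch (the function name is an assumption):

```python
def adjust_shift(desired_frac_x: float, desired_frac_y: float,
                 blur_x: float, blur_y: float) -> tuple[float, float]:
    """Choose an actual pixel-shift vector (like vector 402) so that
    blur + shift has the desired sub-integer-pitch (fractional) part
    (like vector 400)."""
    return (desired_frac_x - blur_x) % 1.0, (desired_frac_y - blur_y) % 1.0

# FIG. 5 example: desired fractional shift (0.5, 0.0), blur (0.25, 0.5)
print(adjust_shift(0.5, 0.0, 0.25, 0.5))   # -> (0.25, 0.5), i.e. vector 402
```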
- after the series of steps in step 202 has been repeated until the set number of pixel shifts is completed, the images stored in the image memory are combined in step 209, the image is output in step 210, and shooting is complete.
- a concrete example is shown below.
- FIG. 6 shows the configuration of the imaging optical system, the pixel shifting means, and the imaging device according to the first embodiment.
- as the imaging optical system, two aspherical lenses 601a and 601b having a diameter of 2.2 mm were used.
- the optical axes of the lenses are almost parallel to the Z axis in FIG. 6, and the distance between them is 3 mm.
- a glass plate 602 is provided on the optical axis of the lens 601b.
- the glass plate 602 can be tilted with respect to the X-axis and Y-axis by a piezoelectric actuator and a tilt mechanism (not shown).
- in this example, the pixels are shifted by 1/2 of the pixel pitch (1.2 µm) in the horizontal direction (X-axis direction) to double the number of pixels.
- the glass plate 602 is optical glass BK7 with a width (X-axis direction) of 2 mm, a height (Y-axis direction) of 2 mm, and a thickness (Z-axis direction) of 500 µm.
- a monochrome CCD 603 having a pitch of 2.4 µm between adjacent pixels was used as the image sensor.
- the glass plate 602 and the light receiving surface of the image sensor 603 are substantially parallel to the XY plane in FIG. 6. The image sensor 603 is divided into two regions 603a and 603b so as to correspond one-to-one with each optical system. By providing a readout circuit and a drive circuit for each of the regions 603a and 603b, the images of the regions 603a and 603b can be read out individually.
- in this example, a method of tilting the glass plate is used as the pixel shifting means, but the means is not limited to this method.
- the image sensor or the lens may be physically moved by a predetermined amount using an actuator such as a piezoelectric or electromagnetic actuator. Even when other means are used for pixel shifting, the configuration is the same as in FIG. 6 except for the glass plate 602.
- in this example, one image sensor is divided into two different regions, but two separate image sensors may be used so as to correspond one-to-one with each optical system; any form is acceptable as long as the plurality of imaging regions correspond one-to-one with the optical systems.
- FIG. 7 shows a configuration of the imaging apparatus according to the second embodiment.
- the main differences from the first embodiment are that a parallax amount deriving means 700 is added, that the imaging element 701 is a single shared unit so that the first and second imaging systems shoot at substantially the same time, and that an optimum image selection means 702, which selects the images to be combined based on the parallax amount and the blur amount, is added. Description of the parts overlapping with the first embodiment is omitted.
- FIG. 8 shows a flowchart of the overall operation of the imaging apparatus according to the present embodiment.
- the imaging start command in step 200 and the imaging pre-processing in step 201 are the same as in the first embodiment.
- in step 800, photographing by pixel shifting is performed.
- step 800 repeats the exposure process of the image sensor in step 801, the transfer of the image sensor's image to the image memory 103 in step 802, and the pixel shift process in step 803.
- since the image sensor 701 is shared by the first imaging system 106b and the second imaging system 106a, the images are taken at almost the same timing.
- in this embodiment, the pixel shift amount is a fixed value regardless of the amount of camera shake, set so that the invalid portions can be used effectively when there is no camera shake (for example, 0.5 pixels).
- in this embodiment, step 800 omits the step of capturing an image with the second imaging system 106a and deriving the blur amount in order to adjust the pixel shift amount (step 205 in FIG. 2). For this reason, the interval between shooting time 1 (shooting without pixel shift) and shooting time 2 (shooting with pixel shift) can be shortened. As a result, camera shake is reduced, and shooting is possible even when the subject moves faster than in the first embodiment.
- after the pixel-shift imaging in step 803 is completed, in step 804 the images stored in time series in the image memory 103 are compared in the same manner as in step 205 of the first embodiment to find the blur amount. If the subject moves, the blur amount is not uniform within the image; if a single blur amount were determined and the images superimposed as a whole, they would not overlap exactly and the resolution would not improve in some locations.
- therefore, by dividing the image into a plurality of blocks and deriving the blur amount for each block, the resolution of the entire image can be improved. The division need not be limited to rectangles; the subject may be detected separately and the blur amount derived for each subject.
- in step 805, images taken at the same time by the imaging systems with different optical axes are compared to determine the parallax amount.
- this is because the position at which the subject image is formed on the image sensor depends not only on the distance between the lens centers but also changes according to the distance of the subject.
- FIG. 9 is a diagram for explaining parallax.
- two imaging optical systems 1301a and 1301b having the same characteristics are installed at a distance D, and their imaging surfaces are denoted 1302a and 1302b, respectively.
- the imaging optical systems 1301a and 1301b observe the same subject from different positions. For this reason, parallax occurs between images formed on the imaging surfaces 1302a and 1302b.
- the parallax amount Δ is given by (Equation 2) below:

  Δ = D · f / (A − f)   … (Equation 2)
- D is the distance between the optical axis of the imaging optical system 1301a and the optical axis of the imaging optical system 1301b
- f is the focal length of the imaging optical systems 1301a and 1301b
- A is the distance between the subject and the imaging planes 1302a and 1302b.
- when the subject distance A is sufficiently large compared with the focal length f, the parallax amount Δ can be expressed as D·f/A, and for a sufficiently distant subject Δ can be considered to be 0.
- in that case, the images taken by the imaging optical systems 1301a and 1301b can be regarded as the same, so if only the lens center distance D is corrected, the synthesis process can be performed as is.
- for a nearby subject, however, the images captured by the imaging optical systems 1301a and 1301b have a shift due to parallax that depends on the subject distance and cannot be regarded as the same, so they cannot be combined as they are.
- the lens center distance D may be calculated from the positions where the lenses are mounted, or a subject serving as a marker may be placed at infinity and the positions where its images are formed regarded as the lens centers for the calculation.
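For illustration, (Equation 2) converted to pixel units; the focal length, baseline, subject distance, and pixel pitch below are assumed example values, not figures from the patent:

```python
def parallax_pixels(D_mm: float, f_mm: float, A_mm: float,
                    pitch_um: float) -> float:
    """Parallax of (Equation 2), delta = D*f/(A - f), in pixel units."""
    delta_mm = D_mm * f_mm / (A_mm - f_mm)
    return delta_mm * 1000.0 / pitch_um

# e.g. lenses 3 mm apart, f = 2 mm, subject at 1 m, 2.4 um pixel pitch
print(parallax_pixels(3.0, 2.0, 1000.0, 2.4))   # ~2.5 pixels
```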
- the method of dividing the block is not limited to this method, and the block may be divided by changing the number of pixels or the shape.
- since the direction in which parallax occurs is determined by the positional relationship between the origins of the imaging regions (the intersections of the image sensor with the optical axis of the corresponding optical system), the combinations of m and n in (Equation 1) need only be limited according to that direction when detecting the parallax.
- in step 806, images are selected whose combination improves the resolution when combined, based on the blur amount and the parallax amount. As described above, resolution is improved by pixel shifting as long as the superimposed pixels are shifted so as to use the invalid portions; shifts caused by blur or parallax can be used in the same way.
- FIG. 10 is an explanatory diagram of a method for selecting an optimum image.
- the shaded area in this figure is a subject image formed on the image sensor.
- subject images 1001a and 1001b are formed in the imaging region 1000a of the second imaging system and the imaging region 1000b of the first imaging system. It is assumed that the subject is on the center line of the second imaging system.
- the subject image 1001b is formed at a position shifted by ⁇ on the imaging region 1000b.
- when the image of each imaging region is transferred to the image memory 103, it is stored as two-dimensional data.
- when the upper-left coordinate of the subject image 1001a is (ax, ay), the upper-left coordinate of the subject image 1001b is (ax + Δ, ay), shifted by the parallax amount Δ.
- the second imaging area is 1002a
- the first imaging area is 1002b
- the subject images at that time are 1003a and 1003b.
- the first imaging system was moved 0.5 pixels to the right by pixel shifting means.
- due to camera shake, the subject image 1003a on the imaging region 1002a is formed at a position shifted by (bx, by).
- FIG. 11 is another explanatory diagram of the optimal image selection method.
- the shift amount bx and the parallax amount Δ can each be classified into a case close to an integer pitch and a case close to an integer pitch plus 0.5 pixel pitch.
- in FIG. 11, bx and Δ denote the parts of these values below the integer pitch (the fractional parts).
- each value in FIG. 11 is the X coordinate of the subject image calculated with the X coordinate ax of the reference imaging region 1000a taken as 0.
- a value of 0 indicates that the positional relationship between the subject and the pixels of the image sensor is shifted by an integer pitch compared with the reference imaging region 1000a, and a value of 0.5 indicates that it is shifted by a further 0.5 pixel pitch.
- the image corresponding to the portion indicated by 0.5 is an image that can effectively use the invalid portion.
- as FIG. 11 shows, whatever the combination of the parallax amount Δ and the camera shake amount bx, among the four images there is always one whose calculated X coordinate is 0.5. Therefore, in any combination, an image that effectively uses the invalid portion can be obtained. That is, the resolution can be improved regardless of camera shake or the distance of the subject.
- in practice, the parts where bx and Δ in FIG. 11 are 0.5 may be values close to 0.5 (for example, 0.3 to 0.7), and the parts set to 0 may be values close to 0 (for example, less than 0.3 or greater than 0.7).
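A sketch of this selection rule, using the 0.3–0.7 tolerance window given above; the labels and offset values in the example are illustrative:

```python
def uses_invalid_portion(offset_px: float, lo: float = 0.3, hi: float = 0.7) -> bool:
    """True if the total offset (blur + parallax + pixel shift, in pixel
    pitches) has a fractional part near 0.5, i.e. the image samples the
    invalid portions of the reference image."""
    return lo <= offset_px % 1.0 <= hi

def select_images(offsets: dict[str, float]) -> list[str]:
    """Pick the images worth combining with the reference image."""
    return [name for name, off in offsets.items() if uses_invalid_portion(off)]

# e.g. parallax 2.5 px -> fractional 0.5 (selected); 1.25 px -> 0.25 (not)
print(select_images({"parallax": 2.5, "blur": 1.25}))
```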
- when combining images, the image data must be arranged on a grid, so linear interpolation processing may be used when superimposing and combining the images.
- the optimum image selection here is based on the horizontal pixel pitch of the image sensor, but the pixel pitch in the oblique direction may also be used as a reference, and pixel pitch references may be mixed depending on the situation.
- Example 2 according to Embodiment 2 will be described.
- the external configuration of Example 2 is the same as that of FIG. 6 in Example 1, and the optical system and the pixel shifting mechanism are also the same as in Example 1.
- Example 2 differs in that the two regions of the image sensor 603 are exposed at approximately the same time and transfer their images, and in that the driving amount of the pixel shifting mechanism is fixed.
- an optical glass plate of BK7 (602 in the figure) with a thickness of 500 µm is provided on the optical axis of the lens 601b and tilted by about 0.4 degrees using a piezoelectric actuator and tilting mechanism.
- this shifts the subject image in the horizontal direction (X-axis direction) by 1/2 of the pixel pitch (1.2 µm), doubling the number of pixels.
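As a plausibility check (small-angle approximation; the refractive index is assumed, not stated here): a parallel plate of thickness t and refractive index n tilted by an angle θ displaces a transmitted ray laterally by approximately

  d ≈ t · θ · (n − 1) / n

With t = 500 µm, θ = 0.4° ≈ 7.0 mrad, and n ≈ 1.52 for BK7, this gives d ≈ 500 × 0.0070 × 0.342 ≈ 1.2 µm, consistent with the stated 1/2 pixel pitch.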
- the time at which the first image is taken is called shooting time 1, and the time at which the second image is taken after the pixel shift (after tilting the glass plate) is called shooting time 2.
- the size of the area to be compared need not be limited to a square, and may be set arbitrarily.
- the parallax amount was obtained from the captured image 701 and the captured image 702 at the imaging time 1 by the parallax amount deriving means.
- the optimum image selection means selects an image to be synthesized.
- in this example, the method of tilting the glass plate is used as the pixel shifting means, but the means is not limited to this method; the image sensor or the lens may be physically moved by a predetermined amount using an actuator such as a piezoelectric or electromagnetic actuator.
- likewise, one image sensor is divided here into two different regions, but two separate image sensors may be used so as to correspond one-to-one with the optical systems; any form is acceptable as long as the plurality of imaging regions correspond one-to-one with the optical systems.
- the present example differs from the second embodiment in that the subject to be imaged (for example, a person or an animal) moves.
- that is, a scene is shot in which part of the subject moves to another location between the first shot, whose data is stored in the memory, and the second shot.
- block dividing means for dividing an image into a plurality of blocks is provided, and the amount of blur is derived for each block.
- the block dividing means is controlled by the system control means 100 and divides the entire first image, taken without pixel shifting by the second imaging system 106a, into blocks of 10 × 10 pixels.
- the blur amount deriving unit 104 checks, for each block, to which position in the second image each divided image corresponds; (Equation 1) was used to derive the image movement amount.
- FIG. 13 shows the images, taken in time series without pixel shifting by the second imaging system 106a, that are stored in the image memory in the present example.
- FIG. 13A shows an image taken at shooting time 1
- FIG. 13B shows an image taken at shooting time 2.
- Fig. 13C shows the amount of image movement derived for each block.
- the block indicated by A is a block for which a blur of 10.1 pixels to the right relative to FIG. 13A is derived, and the block indicated by B is a block for which a blur of 8.8 pixels to the left relative to FIG. 13A is derived.
- in these blur amounts, the camera shake and the movement of the subject are combined.
- the parallax can also be obtained for each block by dividing the image into blocks. From the sum of the blur amount and the parallax amount, as in the second embodiment, images whose arrangement is at (or close to) an integer pitch and images at (or close to) an integer pitch plus 0.5 pixel pitch can be selected, so that images that improve the resolution when combined can be chosen.
- the resolution of the entire image can be improved even when the movement of the subject is large.
- since pixel shifting is a technique for improving resolution, it has no effect on smooth surfaces of the subject or on fine patterns below the resolving power of the lens.
- when shifting pixels, shortening the time between shots reduces camera shake and subject blur and improves resolution; for blocks where pixel shifting has no effect, the shooting interval can be shortened by stopping the processing for those blocks.
- when a Fourier transform is applied, high-frequency components appear in high-resolution parts. Therefore, after an image is captured and divided into blocks, the frequency components of each block may be analyzed, and if they are below a predetermined condition, the derivation of the blur amount and the parallax calculation for that block may be skipped.
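A sketch of such a per-block test, assuming a 2-D numpy array per block; the cutoff frequency and energy threshold are illustrative assumptions:

```python
import numpy as np

def block_worth_processing(block: np.ndarray, cutoff: float = 0.25,
                           thresh: float = 0.05) -> bool:
    """Skip blur/parallax derivation for blocks with little
    high-frequency energy (smooth areas gain nothing from pixel shifting)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block)))
    fy = np.abs(np.fft.fftshift(np.fft.fftfreq(block.shape[0])))[:, None]
    fx = np.abs(np.fft.fftshift(np.fft.fftfreq(block.shape[1])))[None, :]
    high = spec[(fy > cutoff) | (fx > cutoff)].sum()
    return high / spec.sum() > thresh   # enough high-frequency detail?
```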
- the present embodiment is different from the third embodiment in that the present embodiment uses subject discrimination means for discriminating different subjects in the image.
- by using the subject discrimination means, the blur amount can easily be derived for each subject. For this reason, the blur amount can be derived accurately even when it differs within an image, as when there is subject blur in addition to camera shake.
- the blocks can be divided for each subject or the block size can be changed for each subject. It is also possible to selectively synthesize only a specific subject when synthesizing images.
- as the subject discrimination means, there are means for measuring the distance to the subject using radio waves or the like to identify different image regions, means for discriminating different subjects by edge detection or other image processing, and a method of extracting subjects from the image using the parallax amount. The specific means is not limited as long as different subjects in the image can be distinguished.
- the basic configuration of the present example is the same as that of the second embodiment, so overlapping description is omitted here.
- Fig. 14 is a diagram showing an image photographed in the present example and a subject group discriminated by the subject discriminating means.
- the captured image was divided into 10 ⁇ 10 pixel blocks (horizontal 11 ⁇ vertical 9), and the distance to the subject was measured by radio waves for each block, and different subjects were identified.
- in the subject discrimination, blocks whose measured distances fell within a certain error range were discriminated as the same subject; in this example, the error range was 5%.
- Fig. 14A is an image taken without pixel shifting with the second imaging system 106a at shooting time 1
- Fig. 14B is an image shot with no pixel shifting with the second imaging system 106a at shooting time 2.
- the distance (unit: meters) measured by radio waves for each block is shown.
- alternatively, the distance A may be calculated for each block from the above (Equation 2) using the parallax Δ obtained for each block.
- in this example, the distance of the subject was measured by radio waves. As shown in FIG. 14A, two subject groups could be identified: subject group 1 at a distance of approximately 5 meters and subject group 2 at a distance of approximately 2 meters, each identified as the blocks whose distances agree within the 5% error range.
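A sketch of grouping blocks by measured distance within the 5% tolerance; the greedy one-pass strategy and the (row, col) block indexing are assumptions for the example:

```python
def group_blocks_by_distance(distances: dict[tuple[int, int], float],
                             tol: float = 0.05) -> list[list[tuple[int, int]]]:
    """Group blocks whose measured distances agree within `tol` (5% here)."""
    groups: list[tuple[float, list[tuple[int, int]]]] = []
    for key, d in sorted(distances.items(), key=lambda kv: kv[1]):
        for ref, members in groups:
            if abs(d - ref) / ref <= tol:   # same subject group
                members.append(key)
                break
        else:
            groups.append((d, [key]))
    return [members for _, members in groups]

# e.g. blocks near 2 m and 5 m form two subject groups
print(group_blocks_by_distance({(0, 0): 5.0, (0, 1): 5.1, (1, 0): 2.0, (1, 1): 2.05}))
```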
- in the image composition, blur correction for subject group 1 is applied to the image taken at shooting time 2, and the images are then combined.
- the same method as in Example 2 was used as the method for selecting an image by the optimum image selection means.
- of the 10.3-pixel-pitch blur of subject group 1, the part below the integer pitch is 0.3 pixels, which can be used as the value of bx in FIG. 11.
- the value of bx may also be taken as negative; in that case, 0.5 in FIG. 11 becomes −0.5.
- the sign of bx only determines whether the invalid pixel to be used effectively lies to the right or to the left of the photoelectric conversion unit; the contribution to the resolution is the same.
- in this way, the blur amount can be derived for each subject, so the blur of the image can be corrected accurately.
- due to camera shake and subject blur, part of the image may protrude from the shooting range.
- for image regions that cannot be matched, instead of increasing the resolution by pixel shifting, it is sufficient to simply select one of the captured images.
- FIG. 15 shows configurations of the imaging system, the pixel shifting means, and the imaging device according to the present embodiment.
- aspherical lenses 1101a to 1101d with a diameter of 2 mm were used as the imaging optical system.
- the optical axes of the lenses are almost parallel to the Z axis in FIG. 15, and the distance between adjacent axes is 2.5 mm.
- Color filters 1102a to 1102d are provided in front of each lens (subject side) as wavelength separation means that transmits only a specific wavelength.
- 1102a and 1102d are color filters that transmit green
- 1102b is a color filter that transmits red
- 1102c is a color filter that transmits blue.
- 1103a to 1103d are four image sensors corresponding one-to-one with the lenses; they use a common drive circuit and operate in synchronization.
- a color image can be obtained by combining images taken by each optical system (color component).
- the pixel pitch of the image sensors is 3 µm in this example.
- each lens and the image sensor are installed in parallel with the X axis in FIG. 15 and at equal intervals, and the light receiving surface of each image sensor is substantially parallel to the XY plane in FIG. .
- Reference numeral 1104 denotes a piezoelectric fine movement mechanism serving as a pixel shifting means.
- the image sensors 1103a to 1103c are attached to the piezoelectric fine movement mechanism 1104 and can be driven in the X and Y directions in the figure.
- the image sensor 1103d is independent of the piezoelectric fine movement mechanism and does not perform pixel shifting; it constitutes the second imaging system.
- FIG. 16 is a plan view of the piezoelectric fine movement mechanism 1104.
- Image sensors 1103a to 1103c are installed on the stage 1201 at the center.
- the stage 1201 is finely moved in the X-axis direction in the figure by the laminated piezoelectric elements 1202a and 1202b, and the stage fixing frame 1202 is finely moved in the Y-axis direction in the figure by the laminated piezoelectric elements 1203a to 1203d.
- the image sensor can be finely moved independently in two axial directions perpendicular to each other in the horizontal plane of the image sensor.
- by capturing the first image with each image sensor, four images corresponding to the four image sensors 1103a to 1103d are obtained.
- each of the three image sensors 1103a to 1103c was then shot while being moved by 0.5 pixel pitch (1.5 µm) in the X and Y directions. Specifically, the first shot is taken without shifting the pixels; the sensors are moved 0.5 pixel pitch in the X direction for the second shot; then, keeping the X position, the third shot is taken after a 0.5 pixel pitch shift in the Y direction; and finally the fourth shot is taken after shifting back 0.5 pixel pitch in the X direction while maintaining the Y position. By combining these four images, a high-resolution image was obtained.
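A sketch of combining the four shots into a doubled grid, assuming equal-sized monochrome frames taken in the order described above:

```python
import numpy as np

def combine_four_shifted(i1: np.ndarray, i2: np.ndarray,
                         i3: np.ndarray, i4: np.ndarray) -> np.ndarray:
    """Interleave four frames taken at (0, 0), (+0.5, 0), (+0.5, +0.5)
    and (0, +0.5) pixel-pitch offsets into a frame with doubled
    resolution in both X and Y."""
    h, w = i1.shape
    out = np.empty((2 * h, 2 * w), dtype=i1.dtype)
    out[0::2, 0::2] = i1   # no shift
    out[0::2, 1::2] = i2   # +0.5 pitch in X
    out[1::2, 1::2] = i3   # +0.5 pitch in X and Y
    out[1::2, 0::2] = i4   # +0.5 pitch in Y
    return out
```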
- the blur amount at each shooting time was derived from the plurality of images taken in time series, without pixel shifting, using the lens 1101d of the second imaging system.
- the parallax amount was derived from the first images captured by the first imaging system with the green color filter 1102a and by the second imaging system with the green color filter 1102d. This is because images taken with the same color filter are easy to compare, so the parallax amount can be obtained more precisely.
- FIG. 17 shows another example of the arrangement of the four optical systems.
- FIG. 17A is an example in which four optical systems are arranged at the vertices of a rectangle. G0 and G1 are green, R is red, and B is blue, indicating the wavelength separation means (color filters).
- FIG. 17B is a diagram for explaining the derivation of the amount of parallax in the arrangement of FIG. 17A.
- a green imaging system arranged on a diagonal line is used for deriving the amount of parallax.
- the parallax of the other red and blue imaging systems can be obtained as the orthogonal components of the parallax amount in the green imaging systems, because the four optical systems are arranged at the vertices of a rectangle (see the sketch below).
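Concretely: once the diagonal parallax vector between G0 and G1 is known, the red and blue parallaxes fall out as its axis-aligned components. A trivial sketch, assuming R shares a row and B shares a column with G0 in the FIG. 17A layout:

```python
def split_parallax(px: float, py: float) -> tuple[float, float]:
    """Decompose the diagonal G0-G1 parallax vector (px, py): the red
    system (horizontal baseline) sees the x component, the blue system
    (vertical baseline) the y component."""
    return px, py
```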
- a color filter is provided in front of the lens for wavelength separation.
- alternatively, a color filter may be provided between the lens and the image sensor, or formed directly on the lens.
- the color filter is not necessarily limited to the three primary colors R, G, and B.
- complementary color filters may also be used for wavelength separation, with the color information converted by image processing afterwards.
- the wavelength separation means is not limited to a color filter.
- when a mechanism that tilts a glass plate is used as the pixel shifting means, colored glass can be used as the glass plate.
- any specific means may be used as the wavelength separation means as long as it is a means for separating only a predetermined wavelength component.
- FIG. 18 is a flowchart illustrating the overall operation of the imaging apparatus according to the third embodiment.
- in the embodiments described so far, the pixel shift operation is determined first and a predetermined number of shots is taken.
- Embodiment 3, by contrast, has a configuration in which the number of shots varies depending on the captured images.
- steps 1500, 1501, 1503, and 1504 in FIG. 18 are the same as steps 200, 201, 801, and 802 in FIG. 8. The configuration in FIG. 18 differs from that in FIG. 8 in the subsequent steps.
- the amount of blur is obtained in step 1505, and the images to be used for synthesis are selected in step 1506.
- after the images are selected in step 1506, step 1507 identifies any image still missing for synthesis and determines the shift amount needed to obtain it.
- in step 1508, the pixel shift is executed.
- this series of steps is repeated from step 1502 until the images necessary for synthesis are obtained. Thereafter, the parallax amount is derived in step 1509, the images stored in the image memory are synthesized in step 1510, the resulting image is output in step 1511, and shooting is completed.
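The loop in FIG. 18 can be summarized structurally as below. This is a paraphrase of the flowchart, not an implementation from the patent: every helper callable is a hypothetical stand-in for the numbered steps, and a frame is simply retaken whenever its blur exceeds a threshold.

```python
from typing import Callable
import numpy as np

def adaptive_capture(
    capture: Callable[[], np.ndarray],                    # steps 1502-1504
    estimate_blur: Callable[[np.ndarray], float],         # step 1505
    blur_threshold: float,
    shifts: list[tuple[float, float]],                    # required shift positions
    shift_pixels: Callable[[tuple[float, float]], None],  # step 1508
) -> list[np.ndarray]:
    """Keep shooting until every required pixel-shift position is covered
    by a sufficiently sharp frame (steps 1505-1507 decide what is missing)."""
    frames: list[np.ndarray] = []
    pending = list(shifts)
    while pending:
        target = pending[0]        # step 1507: next missing shift position
        shift_pixels(target)       # step 1508: execute the pixel shift
        frame = capture()
        if estimate_blur(frame) <= blur_threshold:   # steps 1505-1506
            frames.append(frame)
            pending.pop(0)         # this position is now covered
    return frames                  # hand off to steps 1509-1511
```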
- the present invention is useful for an imaging device in, for example, a digital still camera or a mobile phone.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Color Television Image Signal Generators (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006519018A JP4699995B2 (ja) | 2004-12-16 | 2005-12-12 | 複眼撮像装置及び撮像方法 |
US10/597,794 US7986343B2 (en) | 2004-12-16 | 2005-12-12 | Multi-eye imaging apparatus |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-363868 | 2004-12-16 | ||
JP2004363868 | 2004-12-16 | ||
JP2005-154447 | 2005-05-26 | ||
JP2005154447 | 2005-05-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006064751A1 (ja) | 2006-06-22 |
Family
ID=36587806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/022751 WO2006064751A1 (ja) | 2004-12-16 | 2005-12-12 | 複眼撮像装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US7986343B2 (ja) |
JP (1) | JP4699995B2 (ja) |
WO (1) | WO2006064751A1 (ja) |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8156116B2 (en) | 2006-07-31 | 2012-04-10 | Ricoh Co., Ltd | Dynamic presentation of targeted information in a mixed media reality recognition system |
US7702673B2 (en) | 2004-10-01 | 2010-04-20 | Ricoh Co., Ltd. | System and methods for creation and use of a mixed media environment |
US10192279B1 (en) | 2007-07-11 | 2019-01-29 | Ricoh Co., Ltd. | Indexed document modification sharing with mixed media reality |
WO2006122320A2 (en) * | 2005-05-12 | 2006-11-16 | Tenebraex Corporation | Improved methods of creating a virtual window |
US9063952B2 (en) * | 2006-07-31 | 2015-06-23 | Ricoh Co., Ltd. | Mixed media reality recognition with image tracking |
US8446509B2 (en) * | 2006-08-09 | 2013-05-21 | Tenebraex Corporation | Methods of creating a virtual window |
WO2009001467A1 (ja) * | 2007-06-28 | 2008-12-31 | Fujitsu Limited | 低照度環境における撮影画像の明るさを改善する電子機器 |
US20090051790A1 (en) * | 2007-08-21 | 2009-02-26 | Micron Technology, Inc. | De-parallax methods and apparatuses for lateral sensor arrays |
US20090079842A1 (en) * | 2007-09-26 | 2009-03-26 | Honeywell International, Inc. | System and method for image processing |
US8791984B2 (en) * | 2007-11-16 | 2014-07-29 | Scallop Imaging, Llc | Digital security camera |
US20090290033A1 (en) * | 2007-11-16 | 2009-11-26 | Tenebraex Corporation | Systems and methods of creating a virtual window |
US8564640B2 (en) * | 2007-11-16 | 2013-10-22 | Tenebraex Corporation | Systems and methods of creating a virtual window |
EP2175632A1 (en) * | 2008-10-10 | 2010-04-14 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
EP2341388A1 (en) * | 2008-10-17 | 2011-07-06 | Hoya Corporation | Visual field image display device for eyeglasses and method for displaying visual field image for eyeglasses |
WO2010103527A2 (en) | 2009-03-13 | 2010-09-16 | Ramot At Tel-Aviv University Ltd. | Imaging system and method for imaging objects with reduced image blur |
US20100328456A1 (en) * | 2009-06-30 | 2010-12-30 | Nokia Corporation | Lenslet camera parallax correction using distance information |
US20120120282A1 (en) * | 2009-08-14 | 2012-05-17 | Goris Andrew C | Reducing Temporal Aliasing |
WO2011037964A1 (en) * | 2009-09-22 | 2011-03-31 | Tenebraex Corporation | Systems and methods for correcting images in a multi-sensor system |
US8390724B2 (en) * | 2009-11-05 | 2013-03-05 | Panasonic Corporation | Image capturing device and network camera system |
CN102118556B (zh) * | 2009-12-31 | 2012-12-19 | 敦南科技股份有限公司 | 图像感测装置即时调整图像撷取频率的方法 |
JP2011257541A (ja) * | 2010-06-08 | 2011-12-22 | Fujifilm Corp | 立体カメラ用レンズシステム |
US8717467B2 (en) * | 2011-01-25 | 2014-05-06 | Aptina Imaging Corporation | Imaging systems with array cameras for depth sensing |
JP5956808B2 (ja) | 2011-05-09 | 2016-07-27 | キヤノン株式会社 | 画像処理装置およびその方法 |
US9058331B2 (en) | 2011-07-27 | 2015-06-16 | Ricoh Co., Ltd. | Generating a conversation in a social network based on visual search results |
JP5762211B2 (ja) * | 2011-08-11 | 2015-08-12 | キヤノン株式会社 | 画像処理装置および画像処理方法、プログラム |
JP2013055381A (ja) * | 2011-08-31 | 2013-03-21 | Ricoh Co Ltd | 撮像装置及び撮像方法並びに携帯情報端末装置 |
JP5917054B2 (ja) * | 2011-09-12 | 2016-05-11 | キヤノン株式会社 | 撮像装置、画像データ処理方法、およびプログラム |
JP2013183353A (ja) * | 2012-03-02 | 2013-09-12 | Toshiba Corp | 画像処理装置 |
US9253433B2 (en) | 2012-11-27 | 2016-02-02 | International Business Machines Corporation | Method and apparatus for tagging media with identity of creator or scene |
US9565416B1 (en) | 2013-09-30 | 2017-02-07 | Google Inc. | Depth-assisted focus in multi-camera systems |
US9426365B2 (en) * | 2013-11-01 | 2016-08-23 | The Lightco Inc. | Image stabilization related methods and apparatus |
US9154697B2 (en) | 2013-12-06 | 2015-10-06 | Google Inc. | Camera selection based on occlusion of field of view |
KR20160068407A (ko) * | 2014-12-05 | 2016-06-15 | 삼성전기주식회사 | 촬영 장치 및 촬영 장치의 제어 방법 |
DE102015215840B4 (de) * | 2015-08-19 | 2017-03-23 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Multiaperturabbildungsvorrichtung, Abbildungssystem und Verfahren zum Bereitstellen einer Multiaperturabbildungsvorrichtung |
JP2020096301A (ja) * | 2018-12-13 | 2020-06-18 | オリンパス株式会社 | 撮像装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06261236A (ja) | 1993-03-05 | 1994-09-16 | Sony Corp | 撮像装置 |
US6429895B1 (en) | 1996-12-27 | 2002-08-06 | Canon Kabushiki Kaisha | Image sensing apparatus and method capable of merging function for obtaining high-precision image by synthesizing images and image stabilization function |
JP3530696B2 (ja) | 1996-12-27 | 2004-05-24 | キヤノン株式会社 | 撮像装置 |
JPH10341367A (ja) | 1997-06-06 | 1998-12-22 | Toshiba Corp | 静止画像生成方法及び静止画像取り込みシステム |
JPH11225284A (ja) | 1998-02-04 | 1999-08-17 | Ricoh Co Ltd | 画像入力装置 |
US6611289B1 (en) * | 1999-01-15 | 2003-08-26 | Yanbin Yu | Digital cameras using multiple sensors with multiple lenses |
JP2000350123A (ja) * | 1999-06-04 | 2000-12-15 | Fuji Photo Film Co Ltd | 画像選択装置、カメラ、画像選択方法及び記録媒体 |
US7262799B2 (en) | 2000-10-25 | 2007-08-28 | Canon Kabushiki Kaisha | Image sensing apparatus and its control method, control program, and storage medium |
US7286168B2 (en) * | 2001-10-12 | 2007-10-23 | Canon Kabushiki Kaisha | Image processing apparatus and method for adding blur to an image |
JP3866957B2 (ja) | 2001-10-23 | 2007-01-10 | オリンパス株式会社 | 画像合成装置 |
JP2004048644A (ja) | 2002-05-21 | 2004-02-12 | Sony Corp | 情報処理装置、情報処理システム、及び対話者表示方法 |
JP4191639B2 (ja) | 2003-03-28 | 2008-12-03 | 川崎 光洋 | 平面画像の5感情報の立体化画像情報関連物 |
US7162151B2 (en) * | 2003-08-08 | 2007-01-09 | Olympus Corporation | Camera |
JP4164424B2 (ja) * | 2003-08-29 | 2008-10-15 | キヤノン株式会社 | 撮像装置及び方法 |
US7123298B2 (en) * | 2003-12-18 | 2006-10-17 | Avago Technologies Sensor Ip Pte. Ltd. | Color image sensor with imaging elements imaging on respective regions of sensor elements |
US7420592B2 (en) * | 2004-06-17 | 2008-09-02 | The Boeing Company | Image shifting apparatus for enhanced image resolution |
JP2006140971A (ja) * | 2004-11-15 | 2006-06-01 | Canon Inc | 画像処理装置及び画像処理方法 |
JP4401949B2 (ja) * | 2004-11-26 | 2010-01-20 | キヤノン株式会社 | 動画撮像装置及び動画撮像方法 |
JP4378272B2 (ja) * | 2004-12-15 | 2009-12-02 | キヤノン株式会社 | 撮影装置 |
- 2005
- 2005-12-12 WO PCT/JP2005/022751 patent/WO2006064751A1/ja not_active Application Discontinuation
- 2005-12-12 US US10/597,794 patent/US7986343B2/en not_active Expired - Fee Related
- 2005-12-12 JP JP2006519018A patent/JP4699995B2/ja not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001012927A (ja) * | 1999-06-29 | 2001-01-19 | Fuji Photo Film Co Ltd | 視差画像入力装置及び撮像装置 |
JP2002204462A (ja) * | 2000-10-25 | 2002-07-19 | Canon Inc | 撮像装置及びその制御方法及び制御プログラム及び記憶媒体 |
JP2005176040A (ja) * | 2003-12-12 | 2005-06-30 | Canon Inc | 撮像装置 |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008135847A (ja) * | 2006-11-27 | 2008-06-12 | Funai Electric Co Ltd | 動き検出撮像装置 |
JP2009049900A (ja) * | 2007-08-22 | 2009-03-05 | Hoya Corp | 撮像装置 |
WO2010013733A1 (ja) * | 2008-07-31 | 2010-02-04 | 富士フイルム株式会社 | 複眼撮像装置 |
JP2010032969A (ja) * | 2008-07-31 | 2010-02-12 | Fujifilm Corp | 複眼撮像装置 |
US8169489B2 (en) | 2008-07-31 | 2012-05-01 | Fujifilm Corporation | Multi view imaging device and method for correcting image blur |
WO2010055643A1 (ja) * | 2008-11-12 | 2010-05-20 | シャープ株式会社 | 撮像装置 |
JP2010118818A (ja) * | 2008-11-12 | 2010-05-27 | Sharp Corp | 撮像装置 |
JP2012070389A (ja) * | 2010-03-19 | 2012-04-05 | Fujifilm Corp | 撮像装置、方法およびプログラム |
WO2015087599A1 (ja) * | 2013-12-09 | 2015-06-18 | ソニー株式会社 | 撮像ユニット、レンズ鏡筒および携帯端末 |
US10050071B2 (en) | 2013-12-09 | 2018-08-14 | Sony Semiconductor Solutions Corporation | Imaging unit, lens barrel, and portable terminal |
WO2015182447A1 (ja) * | 2014-05-28 | 2015-12-03 | コニカミノルタ株式会社 | 撮像装置および測色方法 |
JP5896090B1 (ja) * | 2014-05-28 | 2016-03-30 | コニカミノルタ株式会社 | 撮像装置および測色方法 |
WO2017094535A1 (en) | 2015-12-01 | 2017-06-08 | Sony Corporation | Surgery control apparatus, surgery control method, program, and surgery system |
US11127116B2 (en) | 2015-12-01 | 2021-09-21 | Sony Corporation | Surgery control apparatus, surgery control method, program, and surgery system |
JP2017220745A (ja) * | 2016-06-06 | 2017-12-14 | オリンパス株式会社 | 撮像装置 |
WO2021182066A1 (ja) * | 2020-03-11 | 2021-09-16 | ソニー・オリンパスメディカルソリューションズ株式会社 | 医療用画像処理装置および医療用観察システム |
Also Published As
Publication number | Publication date |
---|---|
JP4699995B2 (ja) | 2011-06-15 |
US20070159535A1 (en) | 2007-07-12 |
JPWO2006064751A1 (ja) | 2008-06-12 |
US7986343B2 (en) | 2011-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4699995B2 (ja) | 複眼撮像装置及び撮像方法 | |
JP5066851B2 (ja) | 撮像装置 | |
EP2518995B1 (en) | Multocular image pickup apparatus and multocular image pickup method | |
US9025060B2 (en) | Solid-state image sensor having a shielding unit for shielding some of photo-electric converters and image capturing apparatus including the solid-state image sensor | |
JP5012495B2 (ja) | 撮像素子、焦点検出装置、焦点調節装置および撮像装置 | |
EP1841207B1 (en) | Imaging device, imaging method, and imaging device design method | |
US8885026B2 (en) | Imaging device and imaging method | |
JP5901246B2 (ja) | 撮像装置 | |
KR101156991B1 (ko) | 촬상 장치, 화상 처리 방법 및 기록 매체 | |
EP2160018B1 (en) | Image pickup apparatus | |
US9282312B2 (en) | Single-eye stereoscopic imaging device, correction method thereof, and recording medium thereof | |
CN103688536B (zh) | 图像处理装置、图像处理方法 | |
JPH08116490A (ja) | 画像処理装置 | |
JP6906947B2 (ja) | 画像処理装置、撮像装置、画像処理方法およびコンピュータのプログラム | |
JP2009141390A (ja) | 撮像素子および撮像装置 | |
JP2012049773A (ja) | 撮像装置および方法、並びにプログラム | |
JP2010213083A (ja) | 撮像装置および方法 | |
JP5348258B2 (ja) | 撮像装置 | |
JP2010130628A (ja) | 撮像装置、映像合成装置および映像合成方法 | |
JP5378283B2 (ja) | 撮像装置およびその制御方法 | |
JP2006135823A (ja) | 画像処理装置、撮像装置および画像処理プログラム | |
CN100477739C (zh) | 复眼摄像装置 | |
JP2008053787A (ja) | 多眼電子カメラ及び多眼電子カメラの視差補正方法 | |
JP2006135501A (ja) | 撮像装置 | |
JP2004007213A (ja) | ディジタル3次元モデル撮像機器 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2006519018 Country of ref document: JP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007159535 Country of ref document: US Ref document number: 10597794 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580005071.X Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 10597794 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05814288 Country of ref document: EP Kind code of ref document: A1 |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 5814288 Country of ref document: EP |