WO2014155812A1 - Image processing device, imaging device, image processing method, and image processing program - Google Patents
Image processing device, imaging device, image processing method, and image processing program
- Publication number: WO2014155812A1 (PCT/JP2013/080322)
- Authority: WIPO (PCT)
Classifications
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/36—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
- G02B7/365—Systems for automatic generation of focusing signals by analysis of the spatial frequency components of the image
- G03B13/36—Autofocus systems
- G03B17/20—Signals indicating condition of a camera member or suitability of light, visible in viewfinder
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/90—Dynamic range modification of images or parts thereof
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/673—Focus control based on contrast or high frequency components of image signals, e.g. hill climbing method
- G06T2207/10024—Color image
- G06T2207/20092—Interactive image processing based on input by user
Definitions
- the present invention relates to an image processing device, an imaging device, an image processing method, and an image processing program.
- Digital cameras having a so-called manual focus mode, in which the user can perform focus adjustment manually in addition to autofocus using a phase difference detection method or a contrast detection method, are widely known.
- As manual focus methods, a method is known that uses a split microprism screen, with a reflex mirror provided, to display the phase difference visually so that focus adjustment can be performed while checking the subject; a method that allows the contrast to be confirmed visually is also known.
- A split image is displayed in a live view image (also referred to as a through image) to make it easier for the user (for example, a photographer) to focus on the subject in the manual focus mode.
- A split image is, for example, a divided image in which the display area is divided into a plurality of parts (for example, images divided in the vertical direction); the divided images are shifted in the parallax generation direction (for example, the horizontal direction) according to the focus shift, and show no shift in the parallax generation direction in the in-focus state.
- The user adjusts the focus by operating a manual focus ring (hereinafter referred to as the "focus ring") so that the split image (for example, each image divided in the vertical direction) shows no shift.
- An imaging apparatus described in Japanese Patent Laid-Open No. 2009-147665 (hereinafter referred to as "Patent Document 1") obtains a first image and a second image by photoelectrically converting a first subject image and a second subject image formed by light beams, among the light beams from the imaging optical system, that are divided by a pupil dividing unit. A split image is generated using the first and second images, and a third image is generated by photoelectrically converting a third subject image formed by the light beam that is not divided by the pupil dividing unit. The third image is displayed on the display unit, the generated split image is displayed within the third image, and color information extracted from the third image is added to the split image. Adding color information extracted from the third image to the split image in this way can improve the visibility of the split image.
- An imaging apparatus described in Japanese Patent Application Laid-Open No. 2012-4729 (hereinafter referred to as "Patent Document 2") calculates, based on the pixel values of the normal pixels around a focus detection pixel, the flatness of the image around the focus detection pixel, an edge amount, and a peripheral luminance indicating the luminance of the image around the focus detection pixel. Based on these values, the pixel value of the focus detection pixel is corrected by either average value correction or gain correction, and gamma correction processing is then performed on the image data in which the pixel value of the focus detection pixel has been corrected.
- In an image processing apparatus described in Japanese Patent Laid-Open No. 2007-124604 (hereinafter referred to as "Patent Document 3"), a face detection unit detects a target region from image data. A gamma correction unit then derives a correction table based on the luminance value of the target region, obtained upon detection of the target region, and on a luminance frequency distribution, which is distribution information obtained from the whole or a part of the image data, and corrects the gradation of the image data using the derived correction table.
- A split image is an image in which image areas of the first image obtained by pupil division (first image areas) and image areas of the second image (second image areas) are combined alternately in a direction intersecting the pupil division direction. Depending on the contrast of the subject, it can therefore be difficult to visually recognize the boundary area including the boundary between a first image area and a second image area.
- The present invention has been proposed in view of this situation, and an object thereof is to provide an image processing apparatus, an imaging apparatus, an image processing method, and an image processing program that make it easy to visually recognize a boundary region included in an in-focus confirmation image.
- An image processing apparatus according to a first aspect of the present invention includes: an image acquisition unit that acquires first and second images based on first and second image signals output from an image sensor having first and second pixel groups, on which a subject image that has passed through first and second regions of a photographing lens is pupil-divided and formed, respectively; a generation unit that generates a first display image based on the image signal output from the image sensor and generates a second display image used for focus confirmation based on the first and second images; a correction unit that corrects the gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic in the second display image and an extreme value of a histogram of pixel values in the second display image; a display unit; and a display control unit that performs control to display the first display image generated by the generation unit on the display unit and to display, within the display area of the first display image, the second display image whose gradation has been corrected by the correction unit.
- An image processing apparatus according to a second aspect of the present invention includes: an image acquisition unit that acquires first and second images based on first and second image signals output from an image sensor having first and second pixel groups, on which a subject image that has passed through first and second regions of a photographing lens is pupil-divided and formed, respectively; a generation unit that generates a first display image based on the image signal output from the image sensor and generates a second display image used for focus confirmation based on the first and second images; a correction unit that corrects the gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic in the second display image and an extreme value of a histogram of pixel values in the second display image; a display unit that displays images; and a display control unit that performs control to display, on the display unit, the second display image whose gradation has been corrected by the correction unit.
- In a third aspect of the present invention, the spatial frequency characteristic may be the direction in which the spatial frequency intensity in the second display image is maximal. This makes it possible to improve the visibility of the boundary area compared with a case without this configuration.
- In a fourth aspect of the present invention, the gradation correction information may be a gradation correction coefficient that increases the contrast of image areas having a luminance below a predetermined value as the degree of coincidence between the direction in which the spatial frequency intensity is maximal and the parallax direction based on the first image and the second image decreases.
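To make this fourth aspect concrete, here is a minimal Python/NumPy sketch. It is an illustration under assumed formulas, not the patent's implementation: the cosine-based coincidence measure, the gamma-style dark-area curve, and names such as `correction_gain` are all hypothetical.

```python
import numpy as np

def correction_gain(dominant_dir_deg, parallax_dir_deg=0.0, max_gain=2.0):
    """Derive a contrast-boost coefficient for low-luminance areas.

    The degree of coincidence is modeled here as |cos| of the angle between
    the dominant spatial-frequency direction and the parallax direction;
    the gain increases as the coincidence decreases (an assumed formula).
    """
    angle = np.deg2rad(dominant_dir_deg - parallax_dir_deg)
    coincidence = abs(np.cos(angle))          # 1.0 = directions coincide
    return 1.0 + (max_gain - 1.0) * (1.0 - coincidence)

def boost_dark_contrast(img, gain, luma_threshold=0.4):
    """Apply a gamma-like curve only where luminance is below the threshold,
    increasing the contrast of the low-luminance image areas."""
    img = img.astype(np.float32)
    dark = img < luma_threshold
    out = img.copy()
    out[dark] = luma_threshold * (img[dark] / luma_threshold) ** gain
    return np.clip(out, 0.0, 1.0)
```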
- In a fifth aspect of the present invention, the second display image may include a first image region included in the first image and a second image region included in the second image, and the gradation correction information may be determined based on at least one of the spatial frequency characteristic in a first boundary region, including the boundary between the first image region and the second image region in the second display image, and the extreme value of the histogram of pixel values in the second display image. This allows more accurate gradation correction information to be determined than in a case without this configuration.
- In a sixth aspect of the present invention, the correction unit may correct the gradation of the first boundary region according to the gradation correction information. Compared with a case without this configuration, this suppresses gradation correction from being performed on areas where it is unnecessary.
- In a seventh aspect of the present invention, the second display image may include a first image region included in the first image and a second image region included in the second image, and the correction unit may correct the gradation of a first boundary region including the boundary between the first image region and the second image region.
- In an eighth aspect of the present invention, when the histogram has a plurality of local maximum values, the content of the gradation correction information determined based on the histogram may be such that the contrast between pixels corresponding to a specific pair of those local maximum values is made larger than the contrast before correction.
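As a hedged illustration of this eighth aspect, the sketch below locates the local maxima of a pixel-value histogram and applies a tone curve that pushes values apart between a chosen pair of maxima. The peak-selection rule (the two most populated maxima) and the power-curve shape are assumptions, not the patent's method, and a roughly bimodal histogram is assumed.

```python
import numpy as np

def find_histogram_peaks(img, bins=256):
    """Return the two most populated local maxima of the histogram
    (bin centers, ascending); assumes a roughly bimodal histogram."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2
    peaks = [i for i in range(1, bins - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    peaks.sort(key=lambda i: hist[i], reverse=True)
    return sorted(centers[i] for i in peaks[:2])

def stretch_between_peaks(img, lo_peak, hi_peak, strength=1.5):
    """Tone curve that increases the contrast between the two peaks:
    values between them are pushed away from the midpoint."""
    half = (hi_peak - lo_peak) / 2
    if half <= 0:
        return img
    mid = (lo_peak + hi_peak) / 2
    t = np.clip((img - mid) / half, -1.0, 1.0)   # -1..1 between the peaks
    stretched = mid + half * np.sign(t) * np.abs(t) ** (1.0 / strength)
    out = np.where((img >= lo_peak) & (img <= hi_peak), stretched, img)
    return np.clip(out, 0.0, 1.0)
```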
- A ninth aspect of the present invention may further include, in the eighth aspect, a determination unit that determines the contrast according to a given instruction. This improves usability compared with a case without this configuration.
- In a tenth aspect of the present invention, the correction unit may correct the gradation of the second display image according to the gradation correction information when the occupancy ratio of pixels having a saturation equal to or higher than a predetermined value, within a second boundary region on the first display image side adjacent to the boundary between the first display image and the second display image, is less than a threshold value. This avoids unnecessary gradation correction compared with a case without this configuration.
- In an eleventh aspect of the present invention, the correction unit may adjust the gradation correction information using an adjustment value determined according to the occupancy ratio, and may correct the gradation of the second display image according to the adjusted gradation correction information.
- In a twelfth aspect of the present invention, the image sensor may further include a third pixel group, on which the subject image that has passed through the photographing lens is formed without pupil division and which outputs a third image signal, and the generation unit may generate the first display image based on the third image signal output from the third pixel group. Compared with a case without this configuration, the image quality of the first display image can be improved with a simple configuration.
- An imaging apparatus according to a thirteenth aspect of the present invention includes the image processing apparatus according to any one of the first to twelfth aspects, an image sensor having the first and second pixel groups, and a storage unit that stores images generated based on image signals output from the image sensor.
- An image processing method according to a fourteenth aspect of the present invention includes: acquiring first and second images based on first and second image signals output from an image sensor having first and second pixel groups, on which a subject image that has passed through first and second regions of a photographing lens is pupil-divided and formed, respectively; generating a first display image based on the image signal output from the image sensor; generating a second display image used for focus confirmation based on the first and second images; correcting the gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic in the second display image and an extreme value of a histogram of pixel values in the second display image; and performing control to display the generated first display image on a display unit and to display, within the display area of the first display image, the second display image whose gradation has been corrected.
- An image processing method according to a fifteenth aspect of the present invention includes: acquiring first and second images based on first and second image signals output from an image sensor having first and second pixel groups, on which a subject image that has passed through first and second regions of a photographing lens is pupil-divided and formed, respectively; generating a first display image based on the image signal output from the image sensor; generating a second display image used for focus confirmation based on the first and second images; correcting the gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic in the second display image and an extreme value of a histogram of pixel values in the second display image; and performing control to display, on a display unit, the second display image whose gradation has been corrected.
- An image processing program according to a sixteenth aspect of the present invention causes a computer to function as the image acquisition unit, the generation unit, the correction unit, and the display control unit according to any one of the first to twelfth aspects. Compared with a case without this configuration, this makes it easier to visually recognize the boundary area included in the in-focus confirmation image.
- FIG. 2 is a schematic layout diagram showing an example of the arrangement of the color filters provided in the image sensor included in the imaging apparatus shown in FIG. 1.
- A schematic configuration diagram showing an example of the configuration of the phase difference pixels (first pixels and second pixels) in the image sensor included in the imaging apparatus shown in FIG. 1.
- A schematic diagram showing an example of the split image display area and the normal image display area in the display device included in the imaging apparatus shown in FIG. 1.
- A graph showing an example of the pixel value conversion function group used by the correction unit.
- FIG. 13 is an intensity distribution diagram showing an example of the spatial frequency intensity distribution obtained by Fourier-transforming the analysis target region in the split image shown in FIG. 12.
- FIG. 16 is an intensity distribution diagram showing an example of the spatial frequency intensity distribution obtained by Fourier-transforming the analysis target region in the split image shown in FIG. 15.
- A diagram showing an example of a state in which the live view image and a contrast adjustment operation unit (soft key) are displayed on the display device included in the imaging apparatus shown in FIG. 1.
- A graph explaining the method of adjusting, via the contrast adjustment operation unit, one pixel value conversion function included in the pixel value conversion function group shown in FIG. 10.
- A graph showing examples of a pixel value conversion function before and after adjustment.
- A flowchart showing an example of the flow of the image output process according to the second embodiment performed by the image processing unit.
- A schematic diagram showing a modification of the split image according to the first to third embodiments, in which the split image is divided by oblique dividing lines inclined with respect to the row direction.
- A schematic diagram showing a modification of the split image according to the first to third embodiments, in which the split image is divided by grid-like dividing lines.
- A schematic diagram showing a modification of the split image according to the first to third embodiments, in which the split image is formed in a checkered pattern.
- A graph showing an example of a gradation correction curve (pixel value conversion function) based on a spline curve.
- FIG. 1 is a perspective view showing an example of the appearance of the imaging apparatus 100 according to the first embodiment, and FIG. 2 is a rear view of the imaging apparatus 100 shown in FIG. 1.
- The imaging apparatus 100 is an interchangeable lens camera. Specifically, it is a digital camera that includes a camera body 200 and an interchangeable lens 300 replaceably attached to the camera body 200, and in which the reflex mirror is omitted. The interchangeable lens 300 includes a photographing lens 16 (see FIG. 3) having a focus lens 302 that can be moved in the optical axis direction by manual operation.
- the camera body 200 is provided with a hybrid finder (registered trademark) 220.
- the hybrid viewfinder 220 here refers to a viewfinder in which, for example, an optical viewfinder (hereinafter referred to as “OVF”) and an electronic viewfinder (hereinafter referred to as “EVF”) are selectively used.
- the interchangeable lens 300 is replaceably attached to the camera body 200.
- The lens barrel of the interchangeable lens 300 is provided with a focus ring 301 used in the manual focus mode. As the focus ring 301 is manually rotated, the focus lens 302 moves in the optical axis direction, and subject light forms an image on an imaging element 20 (see FIG. 3), described later, at a focus position corresponding to the subject distance.
- An OVF viewfinder window 241 of the hybrid viewfinder 220 is provided on the front surface of the camera body 200, together with a finder switching lever (finder switching unit) 214. When the finder switching lever 214 is rotated in the direction of the arrow SW, the view switches between an optical image viewable with the OVF and an electronic image (live view image) viewable with the EVF, as described later.
- the optical axis L2 of the OVF is an optical axis different from the optical axis L1 of the interchangeable lens 300.
- The camera body 200 is also provided with a release button 211 and a dial 212 for setting the shooting mode, the playback mode, and the like.
- the release button 211 serving as a photographing preparation instruction unit and a photographing instruction unit is configured to detect a two-stage pressing operation between a photographing preparation instruction state and a photographing instruction state.
- The shooting preparation instruction state refers, for example, to a state where the button is pressed from the standby position to an intermediate position (half-pressed position), while the shooting instruction state refers to a state where the button is pressed past the intermediate position to the final pressed position (fully-pressed position).
- “a state where the button is pressed from the standby position to the half-pressed position” is referred to as “half-pressed state”
- “a state where the button is pressed from the standby position to the fully-pressed position” is referred to as “full-pressed state”.
- the shooting mode and the playback mode are selectively set as the operation mode in accordance with a user instruction.
- a manual focus mode and an autofocus mode are selectively set according to a user instruction.
- In the autofocus mode, the shooting conditions, including AE (Automatic Exposure) and AF (Auto-Focus), are adjusted by half-pressing the release button 211, and exposure (shooting) is then performed when the release button 211 is fully pressed.
- On the back of the camera body 200, an OVF viewfinder eyepiece 242, a display unit 213, a cross key 222, a MENU/OK key 224, and a BACK/DISP button 225 are provided.
- the cross key 222 functions as a multi-function key for outputting various command signals such as selection of one or a plurality of menus, zooming and frame advancement.
- The MENU/OK key 224 is an operation key that combines a function as a menu button for instructing display of one or more menus on the screen of the display unit 213 and a function as an OK button for instructing confirmation and execution of a selection.
- the BACK / DISP button 225 is used for deleting a desired object such as a selection item, canceling a designated content, or returning to the previous operation state.
- the display unit 213 is realized by, for example, an LCD, and is used to display a live view image (through image) that is an example of a continuous frame image obtained by capturing a continuous frame in the shooting mode.
- the display unit 213 is also used to display a still image that is an example of a single frame image obtained by capturing a single frame when a still image shooting instruction is given.
- the display unit 213 is also used for displaying a playback image and a menu screen in the playback mode.
- FIG. 3 is a block diagram showing an example of the electrical configuration (internal configuration) of the imaging apparatus 100 according to the first embodiment.
- the imaging apparatus 100 includes a mount 256 provided in the camera body 200 and a mount 346 on the interchangeable lens 300 side corresponding to the mount 256.
- the interchangeable lens 300 is attached to the camera body 200 in a replaceable manner by the mount 346 being coupled to the mount 256.
- the interchangeable lens 300 includes a slide mechanism 303 and a motor 304.
- the slide mechanism 303 moves the focus lens 302 in the direction of the optical axis L1 by operating the focus ring 301.
- a focus lens 302 is attached to the slide mechanism 303 so as to be slidable in the direction of the optical axis L1.
- a motor 304 is connected to the slide mechanism 303, and the slide mechanism 303 receives the power of the motor 304 and slides the focus lens 302 along the direction of the optical axis L1.
- the motor 304 is connected to the camera body 200 via mounts 256 and 346, and driving is controlled according to a command from the camera body 200.
- a stepping motor is applied as an example of the motor 304. Therefore, the motor 304 operates in synchronization with the pulse power according to a command from the camera body 200.
- the imaging device 100 is a digital camera that records captured still images and moving images, and the operation of the entire camera is controlled by a CPU (central processing unit) 12.
- the imaging apparatus 100 includes an operation unit 14, an interface unit 24, a memory 26, and an encoder 34.
- The imaging apparatus 100 includes display control units 36A and 36B, which are examples of the display control unit according to the present invention, and an eyepiece detection unit 37.
- the imaging apparatus 100 includes an image processing unit 28 that is an example of an image acquisition unit, a generation unit, a correction unit, and a determination unit according to the present invention.
- In the following, when it is not necessary to distinguish between the display control units 36A and 36B, they are referred to as the "display control unit 36".
- In the first embodiment, the display control unit 36 is provided as a hardware configuration separate from the image processing unit 28. However, the present invention is not limited to this; the image processing unit 28 may have the same functions as the display control unit 36, in which case the display control unit 36 is unnecessary.
- The CPU 12, the operation unit 14, the interface unit 24, the memory 26 (an example of a storage unit), the image processing unit 28, the encoder 34, the display control units 36A and 36B, the eyepiece detection unit 37, and the external interface (I/F) 39 are connected to one another via a bus 40.
- the memory 26 includes a non-volatile storage area (such as an EEPROM) that stores parameters, programs, and the like, and a volatile storage area (such as an SDRAM) that temporarily stores various information such as images.
- the CPU 12 performs focusing control by driving and controlling the motor 304 so that the contrast value of the image obtained by imaging is maximized.
- the CPU 12 calculates AE information that is a physical quantity indicating the brightness of an image obtained by imaging.
- the CPU 12 derives the shutter speed and F value corresponding to the brightness of the image indicated by the AE information. Then, the exposure state is set by controlling each related part so that the derived shutter speed and F value are obtained.
- the operation unit 14 is a user interface operated by the user when giving various instructions to the imaging apparatus 100. Various instructions received by the operation unit 14 are output as operation signals to the CPU 12, and the CPU 12 executes processing according to the operation signals input from the operation unit 14.
- the operation unit 14 includes a release button 211, a focus mode switching unit 212 for selecting a shooting mode, a finder switching lever 214, a cross key 222, a MENU / OK key 224, and a BACK / DISP button 225.
- the operation unit 14 also includes a touch panel 215 that accepts various types of information. The touch panel 215 is superimposed on the display screen of the display unit 213, for example.
- the camera body 200 includes a position detection unit 23.
- the position detection unit 23 is connected to the CPU 12.
- the position detection unit 23 is connected to the focus ring 301 via mounts 256 and 346, detects the rotation angle of the focus ring 301, and outputs rotation angle information indicating the rotation angle as a detection result to the CPU 12.
- the CPU 12 executes processing according to the rotation angle information input from the position detection unit 23.
- the image light indicating the subject is incident on the light receiving surface of the color image sensor (for example, a CMOS sensor) 20 via the photographing lens 16 including the focus lens 302 that can be moved manually and the shutter 18.
- the signal charge accumulated in the image sensor 20 is sequentially read out as a digital signal corresponding to the signal charge (voltage) by a read signal applied from the device control unit 22.
- the imaging element 20 has a so-called electronic shutter function, and controls the charge accumulation time (shutter speed) of each photosensor according to the timing of the readout signal by using the electronic shutter function.
- the image sensor 20 according to the first embodiment is a CMOS image sensor, but is not limited thereto, and may be a CCD image sensor.
- the image sensor 20 includes a color filter 21 shown in FIG. 4 as an example.
- the color filter 21 includes a G filter corresponding to G (green) that contributes most to obtain a luminance signal, an R filter corresponding to R (red), and a B filter corresponding to B (blue).
- In the first embodiment, "4896 × 3265" pixels are adopted as an example of the number of pixels of the image sensor 20, and the G, R, and B filters are arranged over these pixels with a predetermined periodicity in each of the horizontal direction (row direction) and the vertical direction (column direction). The imaging apparatus 100 can therefore perform processing according to a repeating pattern when performing synchronization (interpolation) processing of the R, G, and B signals.
- the synchronization process is a process for calculating all color information for each pixel from a mosaic image corresponding to a color filter array of a single-plate color image sensor.
- the synchronization process means a process for calculating color information of all RGB for each pixel from a mosaic image made of RGB.
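As a minimal example of what such synchronization (demosaicing) processing involves, the sketch below fills in the missing G values of a color-filter mosaic by averaging the available G neighbors. This generic bilinear interpolation is an assumption for illustration, not the repeating-pattern-specific processing of the imaging apparatus 100.

```python
import numpy as np

def interpolate_green(raw, g_mask):
    """Fill in G at non-G sites by averaging the 4-neighborhood G values.

    raw    : 2-D mosaic image (float)
    g_mask : boolean array, True where the pixel sits under a G filter
    """
    g = np.where(g_mask, raw, 0.0)
    w = g_mask.astype(np.float32)
    gp = np.pad(g, 1)   # zero-pad so border pixels have 4 "neighbors"
    wp = np.pad(w, 1)
    # sum of available G neighbors and how many of them are valid
    nsum = gp[:-2, 1:-1] + gp[2:, 1:-1] + gp[1:-1, :-2] + gp[1:-1, 2:]
    ncnt = wp[:-2, 1:-1] + wp[2:, 1:-1] + wp[1:-1, :-2] + wp[1:-1, 2:]
    filled = np.divide(nsum, ncnt, out=np.zeros_like(nsum), where=ncnt > 0)
    return np.where(g_mask, raw, filled)
```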
- the imaging apparatus 100 has a phase difference AF function.
- the image sensor 20 includes a plurality of phase difference detection pixels used when the phase difference AF function is activated.
- The plurality of phase difference detection pixels are arranged in a predetermined pattern. As shown in FIG. 4 as an example, each phase difference detection pixel is provided with either a light shielding member 20A that shields the left half of the pixel in the row direction or a light shielding member 20B that shields the right half of the pixel in the row direction.
- In the following, the phase difference detection pixel provided with the light shielding member 20A is referred to as the "first pixel", and the phase difference detection pixel provided with the light shielding member 20B as the "second pixel". When there is no need to distinguish between them, they are referred to as "phase difference pixels".
- The light shielding member 20A is provided on the front side (microlens 19 side) of the photodiode PD and shields the left half of the light receiving surface (the left side when facing the subject from the light receiving surface; in other words, the right side when facing the light receiving surface from the subject). The light shielding member 20B is provided on the front side of the photodiode PD and shields the right half of the light receiving surface (the right side when facing the subject from the light receiving surface; in other words, the left side when facing the light receiving surface from the subject).
- The microlens 19 and the light shielding members 20A and 20B function as a pupil dividing unit: the first pixel receives only light on the left side of the optical axis of the light beam passing through the exit pupil of the photographing lens 16, and the second pixel receives only light on the right side. In this way, the light beam passing through the exit pupil is divided into left and right by the microlens 19 and the light shielding members 20A and 20B serving as the pupil dividing unit, and the left and right beams are incident on the first pixel and the second pixel, respectively.
- Of the subject image corresponding to the left half of the light beam passing through the exit pupil of the photographing lens 16 and the subject image corresponding to the right half, the in-focus portions form images at the same position on the image sensor 20, whereas front-focused or back-focused portions are incident on different positions on the image sensor 20 (the phase is shifted).
- the subject image corresponding to the left half light beam and the subject image corresponding to the right half light beam can be acquired as parallax images (left eye image and right eye image described later) having different parallaxes.
- the imaging apparatus 100 detects the amount of phase shift based on the pixel value of the first pixel and the pixel value of the second pixel by using the phase difference AF function. Then, the focal position of the photographing lens is adjusted based on the detected phase shift amount.
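A rough sketch of this kind of phase shift detection, assuming a simple sum-of-absolute-differences search (the actual correlation method used by the imaging apparatus 100 is not specified here):

```python
import numpy as np

def detect_phase_shift(left_row, right_row, max_shift=16):
    """Estimate the horizontal phase shift between a row of first-pixel
    values and a row of second-pixel values by minimizing the SAD."""
    left_row = np.asarray(left_row, dtype=np.float32)
    right_row = np.asarray(right_row, dtype=np.float32)
    n = len(left_row)
    best_shift, best_sad = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = left_row[max(0, s):n + min(0, s)]
        b = right_row[max(0, -s):n + min(0, -s)]
        sad = np.mean(np.abs(a - b))
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift  # ~0 when in focus; sign indicates front/back focus
```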
- the light shielding members 20A and 20B are referred to as “light shielding members” without reference numerals.
- the image sensor 20 is classified into a first pixel group, a second pixel group, and a third pixel group.
- the first pixel group refers to a plurality of first pixels, for example.
- the second pixel group refers to, for example, a plurality of second pixels.
- The third pixel group refers to, for example, a plurality of normal pixels (an example of third pixels). In the example shown in FIG. 4, the first pixels and the second pixels are arranged alternately along straight lines in each of the row direction and the column direction at intervals of a plurality of pixels (two pixels in the example shown in FIG. 4), with normal pixels arranged between the first pixels and the second pixels.
- In the following, the RAW image output from the first pixel group is referred to as the "first image", the RAW image output from the second pixel group as the "second image", and the RAW image output from the third pixel group as the "third image".
- The image sensor 20 outputs the first image (a digital signal indicating the pixel value of each first pixel) from the first pixel group, the second image (a digital signal indicating the pixel value of each second pixel) from the second pixel group, and the third image (a digital signal indicating the pixel value of each normal pixel) from the third pixel group. The third image output from the third pixel group is a chromatic image, for example a color image having the same color array as the arrangement of the normal pixels.
- the first image, the second image, and the third image output from the image sensor 20 are temporarily stored in a volatile storage area in the memory 26 via the interface unit 24.
- the image processing unit 28 performs various kinds of image processing on the first to third images stored in the memory 26. As shown in FIG. 6 as an example, the image processing unit 28 includes an image acquisition unit 28A, a generation unit 28B, a correction unit 28C, and a determination unit 28D.
- the image processing unit 28 is realized by an ASIC (Application Specific Integrated Circuit) which is an integrated circuit in which a plurality of functions related to image processing are integrated into one.
- the hardware configuration is not limited to this, and may be, for example, a programmable logic device or another hardware configuration such as a computer including a CPU, a ROM, and a RAM.
- the image acquisition unit 28A acquires the first image and the second image output from the image sensor 20.
- The generation unit 28B generates a first display image based on the third image output from the image sensor 20, and generates a second display image used for focus confirmation based on the first image and the second image acquired by the image acquisition unit 28A.
- the generation unit 28B includes a normal processing unit 30 and a split image processing unit 32 as shown in FIG. 7 as an example.
- the normal processing unit 30 processes the R, G, and B signals corresponding to the third pixel group to generate a chromatic color normal image that is an example of the first display image.
- the split image processing unit 32 generates an achromatic split image that is an example of a second display image by processing G signals corresponding to the first pixel group and the second pixel group.
- the correction unit 28C corrects the gradation of the split image according to the gradation correction information determined based on at least one of the spatial frequency characteristic in the split image and the extreme value of the histogram of the pixel value in the split image.
- As an example of the content of the gradation correction information determined based on the histogram, when the histogram has a plurality of local maximum values, the information may make the contrast between pixels corresponding to a specific pair of those local maximum values larger than the uncorrected contrast.
- The determination unit 28D determines the contrast between the pixels corresponding to the specific pair of local maximum values among the plurality of local maximum values according to a given instruction (for example, an instruction from the user via the operation unit 14).
- the encoder 34 converts the input signal into a signal of another format and outputs it.
- the hybrid viewfinder 220 has an LCD 247 that displays an electronic image.
- the number of pixels in a predetermined direction on the LCD 247 (for example, the number of pixels in the row direction, which is the parallax generation direction) is smaller than the number of pixels in the same direction on the display unit 213.
- the display control unit 36A is connected to the display unit 213, and the display control part 36B is connected to the LCD 247. By selectively controlling the LCD 247 and the display unit 213, an image is displayed on the LCD 247 or the display unit 213.
- the display unit 213 and the LCD 247 are referred to as “display devices” when it is not necessary to distinguish between them.
- the imaging apparatus 100 is configured to be able to switch between a manual focus mode and an autofocus mode by a dial 212 (focus mode switching unit).
- When the manual focus mode is set, the display control unit 36 causes the display device to display a live view image combined with the split image.
- the CPU 12 operates as a phase difference detection unit and an automatic focus adjustment unit.
- the phase difference detection unit detects a phase difference between the first image output from the first pixel group and the second image output from the second pixel group.
- Based on the detected phase difference, the automatic focus adjustment unit controls the motor 304 from the device control unit 22 via the mounts 256 and 346 so that the defocus amount of the focus lens 302 becomes zero, thereby moving the focus lens 302 to the in-focus position.
- defocus amount refers to, for example, the amount of phase shift between the first image and the second image.
- the eyepiece detection unit 37 detects that a user (for example, a photographer) has looked into the viewfinder eyepiece unit 242 and outputs the detection result to the CPU 12. Therefore, the CPU 12 can grasp whether or not the finder eyepiece unit 242 is used based on the detection result of the eyepiece detection unit 37.
- the external I / F 39 is connected to a communication network such as a LAN (Local Area Network) or the Internet, and controls transmission / reception of various information between the external device (for example, a printer) and the CPU 12 via the communication network. Therefore, when a printer is connected as an external device, the imaging apparatus 100 can output a captured still image to the printer for printing. Further, when a display is connected as an external device, the imaging apparatus 100 can output and display a captured still image or live view image on the display.
- FIG. 8 is a functional block diagram illustrating an example of main functions of the imaging apparatus 100 according to the first embodiment.
- Note that components identical to those described above are denoted by the same reference symbols, and their description is omitted.
- The normal processing unit 30 and the split image processing unit 32 each have a WB gain unit, a gamma correction unit, and a synchronization processing unit (none shown), and these processing units sequentially perform signal processing on the original digital signals (RAW images) temporarily stored in the memory 26. That is, the WB gain unit executes white balance (WB) correction by adjusting the gains of the R, G, and B signals.
- the gamma correction unit performs gamma correction on each of the R, G, and B signals that have been subjected to WB by the WB gain unit.
- the synchronization processing unit performs color interpolation processing corresponding to the color filter array of the image sensor 20, and generates synchronized R, G, B signals.
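The sequence just described can be pictured with the following hedged sketch; the per-channel gains and the gamma value are placeholder assumptions, and the synchronization step is omitted for brevity.

```python
import numpy as np

def wb_gain(rgb, gains=(1.9, 1.0, 1.6)):
    """White balance: scale the R, G, B channels by per-channel gains
    (example values; in practice they come from the camera's WB estimate)."""
    return np.clip(rgb * np.asarray(gains, dtype=np.float32), 0.0, 1.0)

def gamma_correct(rgb, gamma=2.2):
    """Gamma correction applied to the WB-corrected R, G, B signals."""
    return np.power(rgb, 1.0 / gamma)

def develop(raw_rgb):
    """Sequential signal processing in the order described above:
    WB gain -> gamma correction (synchronization omitted here)."""
    return gamma_correct(wb_gain(raw_rgb))
```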
- the normal processing unit 30 and the split image processing unit 32 perform image processing on the RAW image in parallel every time a RAW image for one screen is acquired by the image sensor 20.
- The normal processing unit 30 receives the R, G, and B RAW images from the interface unit 24 and generates a normal image for recording by interpolating the R, G, and B pixels at the positions of the first and second pixel groups using peripheral pixels of the same color in the third pixel group (for example, adjacent G pixels).
- the normal processing unit 30 outputs the generated image data of the normal image for recording to the encoder 34.
- the R, G, and B signals processed by the normal processing unit 30 are converted (encoded) into recording signals by the encoder 34 and recorded in the recording unit 40.
- a normal image for display that is an image based on the third image processed by the normal processing unit 30 is output to the display control unit 36.
- In the following, when it is not necessary to distinguish between the normal image for recording and the normal image for display, the words "for recording" and "for display" are omitted and each is referred to as a "normal image".
- The image sensor 20 can change the exposure conditions of the first pixel group and the second pixel group independently (for example, the shutter speed of the electronic shutter), and can thereby acquire images with different exposure conditions simultaneously. The image processing unit 28 can therefore generate an image with a wide dynamic range based on the images with different exposure conditions. It is also possible to acquire a plurality of images simultaneously under the same exposure condition and, by adding them, generate a high-sensitivity image with little noise or a high-resolution image.
- The split image processing unit 32 extracts the G signals of the first pixel group and the second pixel group from the RAW image temporarily stored in the memory 26, and generates an achromatic split image based on these G signals. As described above, each of the pixel groups extracted from the RAW image corresponding to the first and second pixel groups is a pixel group of G filter pixels, so the split image processing unit 32 can generate an achromatic left parallax image and an achromatic right parallax image based on the G signals of these pixel groups.
- In the following, the above "achromatic left parallax image" is referred to as the "left eye image", and the above "achromatic right parallax image" as the "right eye image".
- The split image processing unit 32 generates a split image by combining the left eye image based on the first image output from the first pixel group and the right eye image based on the second image output from the second pixel group alternately in a predetermined direction (for example, a direction intersecting the parallax generation direction).
- the generated split image image data is output to the display control unit 36 by the split image processing unit 32.
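A minimal sketch of this strip-wise composition, assuming four horizontal strips of equal height (both are illustrative choices, not values from the patent):

```python
import numpy as np

def make_split_image(left_eye, right_eye, n_strips=4):
    """Combine the left-eye and right-eye images alternately in the
    direction intersecting the parallax generation direction
    (here: stacked vertically as horizontal strips)."""
    assert left_eye.shape == right_eye.shape
    h = left_eye.shape[0]
    out = np.empty_like(left_eye)
    bounds = np.linspace(0, h, n_strips + 1).astype(int)
    for i in range(n_strips):
        src = left_eye if i % 2 == 0 else right_eye
        out[bounds[i]:bounds[i + 1]] = src[bounds[i]:bounds[i + 1]]
    return out
```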
- The display control unit 36 generates display image data based on the normal image data for display input from the normal processing unit 30 and on the split image data, corresponding to the first and second pixel groups, input from the split image processing unit 32. Specifically, the display control unit 36 combines the split image indicated by the image data input from the split image processing unit 32 into the display area of the normal image indicated by the image data corresponding to the third pixel group input from the normal processing unit 30.
- the image data obtained by the synthesis is output to the display device. That is, the display control unit 36A outputs the image data to the display unit 213, and the display control unit 36B outputs the image data to the LCD 247.
- the display device continuously displays the normal image as a moving image, and continuously displays the split image as a moving image in the display area of the normal image.
- the split image is displayed in a rectangular frame at the center of the screen of the display device, and a normal image is displayed in the outer peripheral area of the split image. Note that the edge line representing the rectangular frame shown in FIG. 9 is not actually displayed, but is shown in FIG. 9 for convenience of explanation.
- In the first embodiment, the split image is combined with the normal image by fitting the split image in place of a part of the normal image. However, the present invention is not limited to this; a combining method in which the split image is superimposed on the normal image may also be used, as may a combining method in which the transparency of the part of the normal image on which the split image is superimposed, and of the split image itself, is adjusted appropriately before superimposition.
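The two combining methods just mentioned can be sketched as follows; the region position arguments and the fixed alpha value are assumptions for illustration.

```python
import numpy as np

def fit_split_image(normal, split, top, left):
    """Fit the split image in place of part of the normal image."""
    out = normal.copy()
    h, w = split.shape[:2]
    out[top:top + h, left:left + w] = split
    return out

def overlay_split_image(normal, split, top, left, alpha=0.7):
    """Superimpose the split image with adjusted transparency instead."""
    out = normal.astype(np.float32).copy()
    h, w = split.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * split + (1.0 - alpha) * region
    return out
```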
- the hybrid finder 220 includes an OVF 240 and an EVF 248.
- the OVF 240 is an inverse Galileo finder having an objective lens 244 and an eyepiece 246, and the EVF 248 has an LCD 247, a prism 245, and an eyepiece 246.
- a liquid crystal shutter 243 is disposed in front of the objective lens 244, and the liquid crystal shutter 243 shields light so that an optical image does not enter the objective lens 244 when the EVF 248 is used.
- the prism 245 reflects the electronic image or various information displayed on the LCD 247 to guide it to the eyepiece 246, and combines the optical image and information (electronic image and various information) displayed on the LCD 247.
- Each time the finder switching lever 214 is rotated, the mode switches alternately between an OVF mode, in which an optical image can be viewed with the OVF 240, and an EVF mode, in which an electronic image can be viewed with the EVF 248.
- In the OVF mode, the display control unit 36B controls the liquid crystal shutter 243 into a non-light-shielding state so that the optical image can be viewed from the eyepiece unit, and displays only the split image on the LCD 247. This makes it possible to display a finder image in which the split image is superimposed on a part of the optical image.
- In the EVF mode, the display control unit 36B controls the liquid crystal shutter 243 into a light-shielding state so that only the electronic image displayed on the LCD 247 can be viewed from the eyepiece unit. Image data equivalent to the image data combined with the split image that is output to the display unit 213 is input to the LCD 247, so that an electronic image in which the split image is combined with a part of the normal image can be displayed in the same manner as on the display unit 213.
- The correction unit 28C shown in FIG. 6 holds a pixel value conversion function group 262, shown in FIG. 10 as an example, and corrects the gradation of the split image using the pixel value conversion function group 262.
- the pixel value conversion function group 262 has a plurality of nonlinear functions for converting pixel values.
- the pixel value conversion function group 262 includes functions 262A, 262B, 262C, 262D, and 262E (an example of gradation correction information according to the present invention) as a plurality of nonlinear functions.
- the functions 262A, 262B, 262C, 262D, and 262E are selectively used by the correction unit 28C shown in FIG.
- The functions 262A, 262B, 262C, 262D, and 262E (hereinafter referred to as "pixel value conversion functions" when there is no need to distinguish between them) are, as shown in FIG. 10 as an example, functions that nonlinearly convert input pixel values into output pixel values.
- Each pixel value conversion function is associated with one of the first to sixth spatial frequency characteristics (hereinafter referred to as “spatial frequency characteristics” when there is no need to distinguish between them) in a specific area included in the split image.
- the first and sixth spatial frequency characteristics are associated with the function 262A.
- the second spatial frequency characteristic is associated with the function 262B.
- the third spatial frequency characteristic is associated with the function 262C.
- The fourth spatial frequency characteristic is associated with the function 262D.
- the fifth spatial frequency characteristic is associated with the function 262E.
- examples of the “specific region included in the split image” include a boundary region including the boundary between the left eye image and the right eye image, or the entire region of the split image.
- The first spatial frequency characteristic refers to a characteristic in which the spatial frequency intensity distribution in the specific region included in the split image spreads markedly in the first direction (its density is equal to or higher than a reference value and higher than in the other directions).
- the second spatial frequency characteristic means a characteristic in which the spatial frequency intensity distribution in a specific region included in the split image is remarkably spread in the second direction.
- the third spatial frequency characteristic means a characteristic in which the spatial frequency intensity distribution in a specific region included in the split image is remarkably spread in the third direction.
- the fourth spatial frequency characteristic means a characteristic in which the spatial frequency intensity distribution in a specific region included in the split image is remarkably spread in the fourth direction.
- the fifth spatial frequency characteristic means a characteristic in which the spatial frequency intensity distribution in a specific region included in the split image is remarkably spread in the fifth direction.
- the sixth spatial frequency characteristic means a characteristic in which the spatial frequency intensity distribution in a specific region included in the split image does not remarkably spread in any of the first to fifth directions.
- the characteristic that the intensity distribution does not significantly spread in any of the first to fifth directions is, for example, that the density of the intensity distribution is less than the reference value in any of the first to fifth directions. Means that.
- In the present embodiment, a table representing each pixel value conversion function is held by the correction unit 28C. However, the present invention is not limited to this; an arithmetic expression representing the pixel value conversion function may instead be held by the correction unit 28C.
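- As an illustration of these two ways of holding a conversion function, the following Python sketch builds each one both as an arithmetic expression and as a 256-entry lookup table; the gamma curves and their exponents are hypothetical stand-ins, since the actual shapes of the functions 262A to 262E are not disclosed.

```python
import numpy as np

def gamma_curve(gamma):
    """Arithmetic-expression form: a simple nonlinear tone curve
    y = 255 * (x / 255) ** gamma. The true shapes of the functions
    262A-262E are not disclosed, so gamma curves serve as placeholders."""
    def f(x):
        return 255.0 * (np.asarray(x, dtype=np.float64) / 255.0) ** gamma
    return f

def to_lut(func):
    """Table form: precompute the curve as a 256-entry lookup table,
    matching the way the correction unit 28C is described as holding it."""
    return np.clip(np.rint(func(np.arange(256))), 0, 255).astype(np.uint8)

# One hypothetical function per group of spatial frequency characteristics.
pixel_value_conversion_group_262 = {
    "262A": to_lut(gamma_curve(0.80)),  # 1st and 6th characteristics
    "262B": to_lut(gamma_curve(0.90)),  # 2nd characteristic
    "262C": to_lut(gamma_curve(1.00)),  # 3rd characteristic
    "262D": to_lut(gamma_curve(1.10)),  # 4th characteristic
    "262E": to_lut(gamma_curve(1.25)),  # 5th characteristic
}
```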
- the correction unit 28C illustrated in FIG. 6 holds a coefficient derivation function 264 illustrated in FIG. 11 as an example.
- The coefficient derivation function 264 is a function for deriving a coefficient for adjusting the pixel value conversion function (an example of the adjustment value according to the present invention) in accordance with the occupancy ratio of high-color pixels in the portion of the normal image displayed in the boundary display area 266 (an example of the second boundary region according to the present invention) illustrated in FIG. 9 as an example (hereinafter referred to as the "high saturation occupancy rate").
- the coefficient derivation function 264 is a curve function in which the coefficient gradually decreases as the high saturation occupancy rate increases.
- The boundary display area 266 indicates, for example, the display area within a predetermined number of pixels (for example, 50 pixels) from the boundary with the split image in the normal-image display area.
- a high color pixel refers to a pixel having a saturation of a predetermined value (for example, 30%) or more.
- the high saturation occupancy rate refers to the ratio of the number of high color pixels to the total number of pixels included in the normal image displayed in the boundary display area 266, for example.
- 50 pixels are exemplified as the predetermined number of pixels. However, the number is not limited to this, and may be about 10% of the maximum number of pixels.
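- A minimal sketch of these two definitions, assuming HSV-style saturation values normalized to [0, 1] and an exponential decay as a stand-in for the undisclosed shape of the coefficient derivation function 264:

```python
import numpy as np

def high_saturation_occupancy(saturation, predetermined=0.30):
    """High saturation occupancy rate: the ratio of high-color pixels
    (saturation >= 30% by default) to the total number of pixels of the
    normal image displayed in the boundary display area 266. `saturation`
    is assumed to be an HxW array of HSV-style saturations in [0, 1]."""
    return float(np.count_nonzero(saturation >= predetermined)) / saturation.size

def coefficient_264(occupancy):
    """Coefficient derivation function 264. The text only states that the
    coefficient gradually decreases as the occupancy rises; this
    exponential decay is a stand-in for the undisclosed curve."""
    return float(np.exp(-2.0 * occupancy))
```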
- a general split image includes a plurality of left-eye images and right-eye images that are alternately combined in the direction intersecting the parallax generation direction (in the example shown in FIG. 12, the vertical direction).
- the left eye image and the right eye image included in the split image are shifted in a predetermined direction (in the example shown in FIG. 12, the parallax generation direction (left-right direction)) according to the in-focus state.
- Depending on the subject, however, the boundary region 268 (an example of the first boundary region according to the present invention) between the left eye image and the right eye image included in the split image may be difficult to visually recognize.
- The boundary region 268 is, for example, a region that includes the boundary between the left eye image and the right eye image and is bounded by positions separated from that boundary, taken as the center line, by a predetermined number of pixels (for example, 50 pixels, or about 10% of the maximum number of pixels) on each of the left-eye-image side and the right-eye-image side. Here, being difficult to visually recognize means that it is difficult to visually distinguish the boundary region 268 from the non-boundary region 270 (the region of the split image other than the boundary region 268).
- For example, when the main subject is a single color (orange in the example shown in FIG. 12), it becomes difficult to visually distinguish the left eye image and the right eye image indicating the main subject in the split image, and hence to distinguish the boundary region 268 from the non-boundary region 270.
- Moreover, when the background of the main subject is a color similar to that of the main subject (light orange in the example shown in FIG. 12) and the saturations are close, the visibility of the boundary region 268 is expected to deteriorate further.
- Therefore, in the manual focus mode, the image processing unit 28 performs the image output process shown in FIG. 13 as an example. The image output process performed by the image processing unit 28 in the manual focus mode will now be described with reference to FIG. 13. Note that the imaging apparatus 100 may equally perform the image output process by having the CPU 12 execute an image output processing program.
- In step 400, the generation unit 28B determines whether or not the image acquisition unit 28A has acquired the first to third images. If the image acquisition unit 28A has not acquired the first to third images, the determination is negative and step 400 is performed again. If the image acquisition unit 28A has acquired the first to third images, the determination is affirmative and the process proceeds to step 402.
- In step 402, the generation unit 28B generates a normal image based on the third image acquired in step 400.
- Also in step 402, the generation unit 28B generates a left eye image and a right eye image based on the first and second images acquired in step 400, and generates a split image based on the generated left eye image and right eye image.
- In step 404, the correction unit 28C calculates the high saturation occupancy rate for the portion of the normal image generated in step 402 that is displayed in the boundary display area 266, and the process then proceeds to step 406.
- In step 406, the correction unit 28C determines whether or not the high saturation occupancy rate is less than a threshold value.
- the threshold value refers to, for example, 70% of the total number of pixels of the normal image displayed in the boundary display area 266. If the high saturation occupancy rate is greater than or equal to the threshold value in step 406, the determination is negative and the routine proceeds to step 426. If the high saturation occupancy is less than the threshold value in step 406, the determination is affirmed and the routine proceeds to step 408.
- In step 408, the correction unit 28C performs a Fourier transform on the analysis target region included in the split image generated in step 402.
- the analysis target area indicates, for example, a boundary area 268 shown in FIG.
- FIG. 14 shows an example of the spatial frequency intensity distribution obtained as a result of performing the Fourier transform in step 408.
- the spatial frequency intensity distribution shown in FIG. 14 is divided for each angular component (an example of the first to fifth directions).
- The angle component refers to, for example, a two-dimensional coordinate region obtained by dividing the intensity distribution diagram by a predetermined plurality of angles, with the center of the intensity distribution diagram shown in FIG. 14 as the origin. In the example illustrated in FIG. 14, a 0° component (an example of the first direction), a 22.5° component (an example of the second direction), a 45° component (an example of the third direction), a 67.5° component (an example of the fourth direction), and a 90° component (an example of the fifth direction) are illustrated.
- In step 410, the correction unit 28C determines whether or not there is an angle component in which the density of the spatial frequency intensity distribution obtained by the Fourier transform in step 408 is equal to or higher than a reference value. If there is no such angle component, the determination is negative and the process proceeds to step 412. If there is such an angle component, the determination is affirmative and the process proceeds to step 414.
- The case where there is no angle component whose density is equal to or higher than the reference value means, for example, that the spatial frequency characteristic is the sixth spatial frequency characteristic. Conversely, the case where there is such an angle component means, for example, that the spatial frequency characteristic is one of the first to fifth spatial frequency characteristics.
- In the example shown in FIG. 14, the 0° component (the first spatial frequency characteristic) is illustrated as an angle component whose density of the spatial frequency intensity distribution is equal to or higher than the reference value. Therefore, when the intensity distribution shown in FIG. 14 is obtained, the correction unit 28C determines in step 410 that an angle component whose density is equal to or higher than the reference value exists.
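- The direction determination of steps 408 to 410 can be sketched as follows; the density measure (mean log-magnitude per angular bin) and the bin width are assumptions, since no numerical definitions are given:

```python
import numpy as np

ANGLES = (0.0, 22.5, 45.0, 67.5, 90.0)  # first to fifth directions

def dominant_direction(region, reference):
    """Fourier-transform a grayscale analysis target region (e.g. the
    boundary region 268) and return the angle component whose intensity
    density is at or above `reference`; None corresponds to the sixth
    spatial frequency characteristic."""
    spectrum = np.fft.fftshift(np.fft.fft2(region.astype(np.float64)))
    magnitude = np.log1p(np.abs(spectrum))
    h, w = region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Angle of each frequency sample about the distribution centre, folded
    # into [0, 90] degrees so that opposite quadrants coincide.
    angle = np.degrees(np.arctan2(np.abs(yy - h // 2), np.abs(xx - w // 2)))
    best, best_density = None, -np.inf
    for a in ANGLES:
        in_bin = np.abs(angle - a) <= 11.25  # half of the 22.5 deg spacing
        if not in_bin.any():
            continue
        density = magnitude[in_bin].mean()
        if density >= reference and density > best_density:
            best, best_density = a, density
    return best  # e.g. 0.0 selects the first direction and thus function 262A
```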
- In step 412, the correction unit 28C derives the function 262A, which is the pixel value conversion function corresponding to the sixth spatial frequency characteristic. Further, the correction unit 28C derives, from the coefficient derivation function 264, the coefficient corresponding to the high saturation occupancy rate calculated in step 404. Then, the pixel value conversion function (here, the function 262A as an example) is adjusted by an algorithm such as spline interpolation based on three specific points, including one point obtained by multiplying a specific pixel value by the derived coefficient, and the process proceeds to step 416.
- The specific pixel value refers to, for example, the mode value or the median value of the histogram of pixel values in the split image generated in step 402, the pixel value of the main subject region, or a predetermined pixel value.
- the specific three points indicate, for example, three points (0, 0), (255, 255), and (specific pixel value, adjusted specific pixel value).
- Alternatively, the pixel value conversion function may be adjusted by an algorithm such as spline interpolation based on a plurality of specific points (four or more points) that include a plurality of points (two or more points) obtained by multiplying a plurality of specific pixel values by the coefficient derived in step 412. For example, the adjusted pixel value conversion function shown in FIG. 27 may be employed; in the example shown in FIG. 27, the adjusted pixel value conversion function passes through two points obtained by multiplying two specific pixel values A and B by the coefficient derived in step 412.
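- A sketch of the three-point adjustment of step 412, using SciPy's CubicSpline as one possible spline implementation; the choice of spline variant and the exact placement of the adjusted point are assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def adjust_conversion_function(specific_value, coeff):
    """Fit a spline through the three specific points (0, 0), (255, 255)
    and (v, coeff * v), where v is e.g. the mode of the split-image
    histogram (0 < v < 255), and sample it into a 256-entry LUT."""
    xs = np.array([0.0, float(specific_value), 255.0])
    ys = np.array([0.0, coeff * float(specific_value), 255.0])
    spline = CubicSpline(xs, ys)
    return np.clip(np.rint(spline(np.arange(256))), 0, 255).astype(np.uint8)

# e.g. a mode value v = 120 with coefficient 0.8 pulls the mid-tones down,
# increasing contrast around the dominant pixel value:
lut = adjust_conversion_function(120, 0.8)
```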
- In step 414, the correction unit 28C derives the pixel value conversion function corresponding to the direction in which the intensity distribution is maximum in the spatial frequency intensity distribution obtained by the Fourier transform in step 408 (here, as an example, the direction in which the spatial frequency intensity is maximum).
- Here, the direction in which the spatial frequency intensity is maximum refers to the direction in which a straight line passing through the origin becomes longest when drawn within the region where the spatial frequency intensity is equal to or higher than a predetermined value.
- In the example shown in FIG. 14, the direction in which the spatial frequency intensity is maximum is the horizontal axis direction; that is, the density of the intensity distribution in the 0° component (the angle component including the horizontal axis direction) is higher than the density of the intensity distribution in the other angle components.
- In this case, the function 262A, which is the corresponding pixel value conversion function, is derived. Further, when the split image shown in FIG. 15 is generated as an example in step 402 and the Fourier transform is performed on the analysis target region included in the generated split image, the intensity distribution shown in FIG. 16 is obtained as an example. In this case, the function 262D, which is the pixel value conversion function corresponding to the 67.5° component, is derived.
- Also in step 414, the correction unit 28C derives, from the coefficient derivation function 264, the coefficient corresponding to the high saturation occupancy rate calculated in step 404. Then, using an algorithm such as the spline interpolation described above, the correction unit 28C adjusts the pixel value conversion function (here, the function 262A as an example) based on the three specific points, including the one point obtained by multiplying the specific pixel value by the derived coefficient.
- Further, the correction unit 28C calculates the degree of coincidence between the direction in which the spatial frequency intensity is maximum (here, as an example, the horizontal axis direction of the intensity distribution diagram shown in FIG. 14) and the parallax direction based on the first and second images acquired in step 400. The pixel value conversion function adjusted with the coefficient derived in step 414 (here, the function 262A as an example) is then further adjusted, by an algorithm such as spline interpolation, so that the lower the calculated degree of coincidence, the larger the contrast on the low luminance side becomes.
- Here, the low luminance side means, for example, pixel values less than a predetermined value (for example, less than 100 within the range of 0 to 255).
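- A rough sketch of this coincidence-dependent further adjustment; the cosine coincidence measure and the gamma-style stretch of the dark range are stand-ins, since the text specifies neither:

```python
import numpy as np

def coincidence(dominant_deg, parallax_deg=0.0):
    """Degree of coincidence between the maximum-intensity direction and
    the parallax direction, mapped to [0, 1]; a cosine measure is assumed,
    as no formula is given."""
    return abs(float(np.cos(np.radians(dominant_deg - parallax_deg))))

def boost_low_luminance(lut, match, predetermined=100):
    """Steepen the curve below the predetermined pixel value as the
    coincidence `match` (in [0, 1]) falls; a gamma stretch of the dark
    sub-range stands in for the unspecified adjustment."""
    out = lut.astype(np.float64)
    gamma = 1.0 / (1.0 + 0.5 * (1.0 - match))  # match = 1 leaves the LUT unchanged
    dark = out < predetermined
    out[dark] = predetermined * (out[dark] / predetermined) ** gamma
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```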
- In step 416, the correction unit 28C generates a histogram of the pixel values of the analysis target region (here, as an example, the boundary region 268 shown in FIG. 12) included in the split image generated in step 402, and the process then proceeds to step 418.
- In step 418, the correction unit 28C determines whether or not there are a plurality of maximum values in the histogram generated in step 416. If a plurality of maximum values do not exist, the determination is negative and the process proceeds to step 424. If a plurality of maximum values exist, the determination is affirmative and the process proceeds to step 420. For example, if the histogram generated in step 416 has a maximum value at each of the pixel values a and b, as shown in FIG. 17 as an example, the determination in step 418 is affirmative.
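- The histogram test of steps 416 and 418 can be sketched as follows; the strict-neighbour peak criterion is an assumption, since the text does not define how maxima are detected:

```python
import numpy as np

def histogram_maxima(region):
    """Build the pixel value histogram of the analysis target region and
    return the pixel values at its local maxima (interior bins strictly
    above both neighbours)."""
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    return [v for v in range(1, 255)
            if hist[v] > hist[v - 1] and hist[v] > hist[v + 1]]

# Step 418 is affirmative when at least two maxima exist, e.g. at the
# pixel values a and b of FIG. 17:
# peaks = histogram_maxima(boundary_region); multiple = len(peaks) >= 2
```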
- In step 420, the determination unit 28D determines whether or not the contrast adjustment operation unit 270 illustrated as an example in FIG. 18 has been operated.
- The contrast adjustment operation unit 270 shown in FIG. 18 is a software key (soft key) displayed on the display unit 213, and includes an F button 270A, a B button 270B, an up button 270C, and a down button 270D.
- the contrast adjustment operation unit 270 is displayed according to a display start instruction via the operation unit 14 by the user, and is not displayed according to a display end instruction via the operation unit 14 by the user.
- the contrast adjustment operation unit 270 displayed on the display unit 213 is operated by the user via the touch panel 215.
- Here, operating the contrast adjustment operation unit 270 refers, for example, to the case where the up button 270C is pressed while the pressed state of the F button 270A is maintained, or the case where the down button 270D is pressed while the pressed state of the B button 270B is maintained.
- If the contrast adjustment operation unit 270 is not operated in step 420, the determination is negative and the process proceeds to step 424. If the contrast adjustment operation unit 270 is operated, the determination is affirmative and the process proceeds to step 422.
- In step 422, the determination unit 28D adjusts the pixel value conversion function so that the contrast between the pixel values corresponding to two specific maximum values in the histogram generated in step 416 increases according to the operation amount of the contrast adjustment operation unit 270.
- the specific two maximum values refer to, for example, any two adjacent maximum values, two adjacent maximum values with the smallest difference between the maximum values, and the like.
- pixel values a and b are shown as pixel values corresponding to two specific maximum values.
- the pixel value conversion function to be adjusted in step 422 is the pixel value conversion function adjusted in step 412 or step 414.
- Specifically, the determination unit 28D adjusts the pixel value conversion function adjusted in step 412 or step 414 so that the difference in gain between the output pixel values for the input pixel values a and b increases according to the operation amount. For example, as shown in FIG. 19, when the up button 270C is pressed while the pressed state of the F button 270A is maintained, the gain of the pixel value b increases according to the pressed amount of the up button 270C (for example, the continuous pressing time, the number of presses, or the pressing force).
- Similarly, when the down button 270D is pressed while the pressed state of the B button 270B is maintained, the gain of the pixel value a decreases according to the pressed amount of the down button 270D.
- the gain of the pixel value a is reduced by 20%, and the gain of the pixel value b is increased by 10%.
- In step 424, the correction unit 28C corrects the gradation of the image included in the correction target region according to the pixel value conversion function adjusted in step 412, step 414, or step 422.
- The correction target region refers to, for example, the boundary region 268 shown in FIG. 12. Note that the correction of the gradation of the image is realized by converting the pixel values (adjusting the contrast) according to the pixel value conversion function.
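- Applying the adjusted conversion function to the correction target region amounts to a lookup-table pass over that region only; for example (the way the region is designated here is hypothetical):

```python
import numpy as np

def correct_gradation(split_image, lut, region_slice):
    """Pass only the correction target region (e.g. the boundary region
    268) through the adjusted conversion function; pixels outside the
    region are left untouched. `region_slice` is a hypothetical way of
    designating the region, e.g. np.s_[100:150, :] for a band of rows."""
    out = split_image.copy()
    out[region_slice] = lut[split_image[region_slice]]
    return out
```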
- In step 426, the correction unit 28C outputs to the display control unit 36 either the normal image and split image generated in step 402, or the normal image generated in step 402 together with the split image whose gradation in the correction target region was corrected in step 424.
- Upon receiving these images, the display control unit 36 performs control to continuously display the normal image as a moving image on the display device and to continuously display the split image as a moving image within the display area of the normal image. Thereby, a live view image is displayed on the display device.
- When the gradation of the correction target region in the split image has been corrected in step 424, the position of the boundary region 268 in the split image included in the live view image can be easily identified visually. Therefore, the user can easily confirm the in-focus state of the taking lens 16 by using the split image included in the live view image.
- As described above, in the imaging apparatus 100 according to the first embodiment, the correction unit 28C corrects the gradation of the split image in accordance with the pixel value conversion function determined based on the spatial frequency characteristics in the split image and the maximum values of the pixel value histogram.
- the imaging apparatus 100 according to the first embodiment can make it easier to visually recognize the boundary region 268 included in the split image, compared to the case where the present configuration is not provided.
- the correction unit 28C corrects the gradation of the split image according to the pixel value conversion function associated with the direction having the maximum spatial frequency characteristic intensity.
- Thereby, compared with the case where the gradation of the split image is corrected according to a pixel value conversion function associated with a direction other than the direction in which the intensity distribution of the spatial frequency characteristic is maximum, the imaging apparatus 100 according to the first embodiment can improve the visibility of the boundary region 268.
- Further, in the imaging apparatus 100 according to the first embodiment, a pixel value conversion function that increases the contrast of image regions having a luminance less than a predetermined value in accordance with a decrease in the degree of coincidence between the direction with the maximum spatial frequency intensity and the parallax direction is adopted as the pixel value conversion function.
- the correction unit 28C determines a pixel value conversion function based on the spatial frequency characteristics in the boundary region 268 and the maximum value of the pixel value histogram. Thereby, the imaging device 100 according to the first embodiment can determine an accurate pixel value conversion function as compared with the case where the present configuration is not provided.
- the gradation of the image included in the boundary region 268 is corrected by the correction unit 28C according to the pixel value conversion function.
- the imaging device 100 according to the first embodiment can make the user more clearly recognize the difference between the boundary region 268 and other regions.
- Further, in the imaging apparatus 100 according to the first embodiment, the correction unit 28C makes the contrast between the pixel values corresponding to a specific pair of maximum values in the histogram larger than the contrast before correction. Thereby, compared with the case where this contrast is not made larger than the contrast before correction, the imaging apparatus 100 according to the first embodiment can improve the visibility of the boundary region 268.
- Further, in the imaging apparatus 100 according to the first embodiment, the determination unit 28D determines the contrast between the pixel values corresponding to the specific pair of maximum values in accordance with a given instruction. Thereby, compared with the case where this contrast is not determined in accordance with a given instruction, the imaging apparatus 100 according to the first embodiment can improve usability.
- Further, in the imaging apparatus 100 according to the first embodiment, the correction unit 28C corrects the gradation of the split image according to the pixel value conversion function when the chromatic color occupancy is less than the threshold value. Accordingly, compared to the case where the gradation of the split image is corrected irrespective of whether the chromatic color occupancy is less than the threshold value, the imaging apparatus 100 according to the first embodiment can avoid performing unnecessary gradation correction.
- Further, in the imaging apparatus 100 according to the first embodiment, the correction unit 28C adjusts the pixel value conversion function using a coefficient determined according to the chromatic color occupancy, and corrects the gradation of the split image according to the adjusted pixel value conversion function. Thereby, compared with the case where the pixel value conversion function is not adjusted using a coefficient determined according to the chromatic color occupancy, the imaging apparatus 100 according to the first embodiment can improve the visibility of the boundary region 268.
- In the first embodiment, the case where the pixel value conversion function is adjusted based on the histogram of the pixel values of the analysis target region in the split image has been described as an example, but the present invention is not limited to this. That is, it is not necessary to adjust the pixel value conversion function based on the histogram; in this case, steps 416, 418, 420, and 422 may be omitted from the image output process shown in FIG. 13.
- In the first embodiment, the pixel value conversion function is determined based on the spatial frequency characteristics in the boundary region 268 and the maximum values of the histogram of pixel values, but the present invention is not limited to this; the pixel value conversion function may be determined based on the spatial frequency characteristics of the entire split image and the maximum values of the histogram of pixel values.
- In the first embodiment, the case where the gradation of the image included in the boundary region 268 is corrected has been described as an example, but the present invention is not limited to this; the gradation of the entire split image may be corrected.
- In the first embodiment, the correction unit 28C performs, in step 414 of the image output process shown in FIG. 13, a process of deriving the pixel value conversion function corresponding to the direction in which the spatial frequency intensity distribution is maximum, but the present invention is not limited to this. For example, a process may be applied in which a plurality of pixel value conversion functions corresponding to a plurality of directions (for example, the first and second directions), that is, to a plurality of angle components whose density of the spatial frequency intensity distribution is equal to or higher than the reference value, are derived, and the pixel value conversion function is acquired by averaging the derived pixel value conversion functions.
- In the first embodiment, the correction unit 28C determines in step 418 of the image output process shown in FIG. 13 whether or not there are a plurality of maximum values, but it may instead determine whether or not there are a plurality of minimum values. In this case, instead of the process of step 422, the determination unit 28D may perform a process of adjusting the pixel value conversion function so that the contrast between the pixel values corresponding to two specific minimum values increases according to the operation amount.
- In the first embodiment, a fixed value set as a default is used as the threshold value in step 406 of the image output process shown in FIG. 13, but the present invention is not limited to this; a value instructed by the user via the operation unit 14 may be adopted.
- In the first embodiment, a soft key is used as the contrast adjustment operation unit 270, but a hardware key (hard key) may be used instead. Further, although a push-button type is employed as the contrast adjustment operation unit 270, the present invention is not limited to this, and a slide-type operation unit may be used.
- The imaging apparatus 100A according to the second embodiment shown in FIGS. 1 to 3 differs from the imaging apparatus 100 described in the first embodiment in that the image processing unit 28 performs an image output process in which, relative to the image output process shown in FIG. 13, step 450 is provided between step 416 and step 418, and steps 452, 454, 456, and 458 are provided instead of steps 420, 422, 424, and 426.
- In step 450, a basic pixel value conversion function is acquired by the correction unit 28C.
- the basic pixel value conversion function refers to, for example, a pixel value conversion function represented by a dashed curve in FIG.
- In the second embodiment, a default function is adopted as the basic pixel value conversion function, but the present invention is not limited to this; for example, the pixel value conversion function may be uniquely determined according to the integral value of the histogram.
- Also in step 450, the correction unit 28C derives, from the coefficient derivation function 264, the coefficient corresponding to the high saturation occupancy rate calculated in step 404, and adjusts the acquired basic pixel value conversion function by multiplying it by the derived coefficient. After step 450 is performed, the image output process proceeds to step 418.
- In step 452, the correction unit 28C acquires adjustment coefficients for the pixel values corresponding to two specific maximum values (hereinafter referred to as the "two specific maximum values") in the histogram generated in step 416.
- the adjustment coefficient refers to a coefficient used for gain adjustment of pixel values corresponding to two specific maximum values, for example.
- the adjustment coefficient is determined in advance for each pixel value and for each difference between the maximum values (for example, the absolute value of the difference), and is uniquely derived by a table or an arithmetic expression.
- In step 454, the correction unit 28C adjusts the basic pixel value conversion function adjusted in step 450 by multiplying each of the output pixel values for the input pixel values corresponding to the two specific maximum values by the corresponding adjustment coefficient acquired in step 452.
- Thereby, as an example, the pixel value conversion function represented by the solid curve shown in FIG. 20 (hereinafter referred to as the "adjusted function") is obtained. In the example shown in FIG. 20, multiplying the output pixel value corresponding to the pixel value a by its adjustment coefficient reduces the gain corresponding to the pixel value a by 20%, and multiplying the output pixel value corresponding to the pixel value b by its adjustment coefficient increases the gain corresponding to the pixel value b by 10%.
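- A sketch of steps 452 and 454, with defaults that reproduce the -20% / +10% gain example of FIG. 20; how the curve is re-smoothed between the adjusted anchor points is not specified, so a monotone piecewise-linear re-interpolation is assumed:

```python
import numpy as np

def apply_adjustment_coefficients(base_lut, a, b, coeff_a=0.8, coeff_b=1.1):
    """Multiply the output values of the basic pixel value conversion
    function at the two specific maxima a and b (0 < a < b < 255) by
    per-value adjustment coefficients, then re-interpolate the curve
    through the adjusted anchors to obtain the adjusted function."""
    xs = np.array([0, a, b, 255], dtype=np.float64)
    ys = np.array([base_lut[0],
                   base_lut[a] * coeff_a,   # e.g. -20% gain at pixel value a
                   base_lut[b] * coeff_b,   # e.g. +10% gain at pixel value b
                   base_lut[255]], dtype=np.float64)
    adjusted = np.interp(np.arange(256), xs, ys)
    return np.clip(np.rint(adjusted), 0, 255).astype(np.uint8)
```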
- In step 456, the correction unit 28C corrects the gradation of the image included in the correction target region according to the basic pixel value conversion function adjusted in step 450 or the adjusted function obtained in step 454, and the process then proceeds to step 458.
- In step 458, the correction unit 28C outputs to the display control unit 36 either the normal image and split image generated in step 402, or the normal image generated in step 402 together with the split image whose gradation in the correction target region was corrected in step 456.
- As described above, in the imaging apparatus 100A according to the second embodiment, the correction unit 28C corrects the gradation of the split image according to the pixel value conversion function determined based on the maximum values of the histogram of pixel values in the split image. Thereby, compared with the case where this configuration is not provided, the imaging apparatus 100A according to the second embodiment can make the boundary region 268 included in the split image easier to visually recognize with a simple configuration.
- In the second embodiment, the correction unit 28C derives the adjustment coefficients and adjusts the basic pixel value conversion function by multiplying the corresponding output pixel values in the pixel value conversion function by the derived adjustment coefficients, but the present invention is not limited to this. For example, the adjusted function may be uniquely derived, by a table or an arithmetic expression, from the two specific maximum values and the pixel values corresponding to these maximum values.
- In the second embodiment, the pixel value conversion function is determined based on the maximum values of the histogram of pixel values in the boundary region 268, but the present invention is not limited to this; the pixel value conversion function may be determined based on the maximum values of the histogram of pixel values in the entire split image.
- a local minimum value may be adopted instead of the local maximum value.
- In each of the above embodiments, the imaging apparatus 100 (100A) has been illustrated; examples of portable terminal devices that are modifications of the imaging apparatus 100 (100A) include mobile phones and smartphones having a camera function, PDAs (Personal Digital Assistants), and portable game machines. In the third embodiment, a smartphone will be described in detail as an example with reference to the drawings.
- FIG. 22 is a perspective view showing an example of the appearance of the smartphone 500.
- The smartphone 500 illustrated in FIG. 22 includes a flat housing 502 and, on one surface of the housing 502, a display input unit 520 in which a display panel 521 serving as a display unit and an operation panel 522 serving as an input unit are integrated.
- the housing 502 includes a speaker 531, a microphone 532, an operation unit 540, and a camera unit 541.
- the configuration of the housing 502 is not limited thereto, and for example, a configuration in which the display unit and the input unit are independent, or a configuration having a folding structure or a slide structure may be employed.
- FIG. 23 is a block diagram showing an example of the configuration of the smartphone 500 shown in FIG.
- The main components of the smartphone 500 include a wireless communication unit 510, a display input unit 520, a communication unit 530, an operation unit 540, a camera unit 541, a storage unit 550, and an external input/output unit 560.
- the smartphone 500 includes a GPS (Global Positioning System) receiving unit 570, a motion sensor unit 580, a power supply unit 590, and a main control unit 501.
- a wireless communication function for performing mobile wireless communication via the base station device BS and the mobile communication network NW is provided as a main function of the smartphone 500.
- the wireless communication unit 510 performs wireless communication with the base station apparatus BS accommodated in the mobile communication network NW according to an instruction from the main control unit 501. Using this wireless communication, transmission and reception of various file data such as audio data and image data, e-mail data, and reception of Web data and streaming data are performed.
- The display input unit 520 is a so-called touch panel and includes the display panel 521 and the operation panel 522. Under the control of the main control unit 501, the display input unit 520 displays images (still images and moving images), character information, and the like to visually convey information to the user, and detects user operations on the displayed information. Note that when viewing generated 3D content, the display panel 521 is preferably a 3D display panel.
- the display panel 521 uses an LCD, OELD (Organic Electro-Luminescence Display), or the like as a display device.
- the operation panel 522 is a device that is placed so that an image displayed on the display surface of the display panel 521 is visible and detects one or a plurality of coordinates operated by a user's finger or stylus. When such a device is operated by a user's finger or stylus, a detection signal generated due to the operation is output to the main control unit 501. Next, the main control unit 501 detects an operation position (coordinates) on the display panel 521 based on the received detection signal.
- As described above, the display panel 521 and the operation panel 522 of the smartphone 500 integrally form the display input unit 520, with the operation panel 522 disposed so as to completely cover the display panel 521.
- the operation panel 522 may have a function of detecting a user operation even in an area outside the display panel 521.
- In other words, the operation panel 522 may include a detection area for the overlapping portion that overlaps the display panel 521 (hereinafter referred to as the display area) and a detection area for the outer edge portion that does not overlap the display panel 521 (hereinafter referred to as the non-display area).
- the operation panel 522 may include two sensitive regions of the outer edge portion and the other inner portion. Further, the width of the outer edge portion is appropriately designed according to the size of the housing 502 and the like.
- Examples of the position detection method employed in the operation panel 522 include a matrix switch method, a resistive film method, a surface acoustic wave method, an infrared method, an electromagnetic induction method, and a capacitance method, any of which may be adopted.
- the communication unit 530 includes a speaker 531 and a microphone 532.
- the communication unit 530 converts the user's voice input through the microphone 532 into voice data that can be processed by the main control unit 501, and outputs the voice data to the main control unit 501. Further, the communication unit 530 decodes the audio data received by the wireless communication unit 510 or the external input / output unit 560 and outputs it from the speaker 531.
- the speaker 531 can be mounted on the same surface as the display input unit 520 and the microphone 532 can be mounted on the side surface of the housing 502.
- the operation unit 540 is a hardware key using a key switch or the like, and receives an instruction from the user.
- the operation unit 540 is mounted on the side surface of the housing 502 of the smartphone 500 and is turned on when pressed with a finger or the like, and is turned off by a restoring force such as a spring when the finger is released. It is a push button type switch.
- the storage unit 550 stores the control program and control data of the main control unit 501, application software, address data that associates the name and telephone number of the communication partner, and transmitted / received e-mail data.
- the storage unit 550 stores Web data downloaded by Web browsing and downloaded content data.
- the storage unit 550 temporarily stores streaming data and the like.
- The storage unit 550 includes an internal storage unit 551 built into the smartphone and an external storage unit 552 having a removable external memory slot.
- Each of the internal storage unit 551 and the external storage unit 552 constituting the storage unit 550 is realized using a storage medium such as a flash memory type, hard disk type, multimedia card micro type, or card type memory (for example, MicroSD (registered trademark) memory), or using RAM (Random Access Memory) or ROM (Read Only Memory).
- The external input/output unit 560 serves as an interface with all external devices connected to the smartphone 500, and is used to connect directly or indirectly to other external devices through communication or the like, or through a network. Examples of communication with other external devices include universal serial bus (USB) and IEEE 1394. Examples of the network include the Internet, wireless LAN, Bluetooth (registered trademark), RFID (Radio Frequency Identification), and infrared communication (Infrared Data Association: IrDA (registered trademark)). Other examples of the network include UWB (Ultra Wideband (registered trademark)) and ZigBee (registered trademark).
- Examples of the external device connected to the smartphone 500 include a wired / wireless headset, wired / wireless external charger, wired / wireless data port, and a memory card connected via a card socket.
- Other examples of external devices include SIM (Subscriber Identity Module) / UIM (User Identity Module) cards and external audio/video devices connected via audio/video I/O (Input/Output) terminals. Instead of a wired connection, a wirelessly connected external audio/video device may be used. The external input/output unit 560 may transmit data received from such external devices to each component inside the smartphone 500, and may transmit data inside the smartphone 500 to the external devices.
- The GPS receiving unit 570 receives GPS signals transmitted from GPS satellites ST1 to STn in accordance with instructions from the main control unit 501, executes positioning arithmetic processing based on the plurality of received GPS signals, and detects the position of the smartphone 500 in terms of latitude, longitude, and altitude.
- the GPS reception unit 570 can acquire position information from the wireless communication unit 510 or the external input / output unit 560 (for example, a wireless LAN), the GPS reception unit 570 can also detect the position using the position information.
- the motion sensor unit 580 includes a triaxial acceleration sensor, for example, and detects the physical movement of the smartphone 500 in accordance with an instruction from the main control unit 501. By detecting the physical movement of the smartphone 500, the moving direction and acceleration of the smartphone 500 are detected. This detection result is output to the main control unit 501.
- the power supply unit 590 supplies power stored in a battery (not shown) to each unit of the smartphone 500 in accordance with an instruction from the main control unit 501.
- the main control unit 501 includes a microprocessor, operates according to a control program and control data stored in the storage unit 550, and controls each unit of the smartphone 500 in an integrated manner. Further, the main control unit 501 includes a mobile communication control function for controlling each unit of the communication system and an application processing function in order to perform voice communication and data communication through the wireless communication unit 510.
- the application processing function is realized by the main control unit 501 operating according to the application software stored in the storage unit 550.
- Application processing functions include, for example, an infrared communication function for controlling the external input/output unit 560 to perform data communication with a counterpart device, an e-mail function for transmitting and receiving e-mails, and a web browsing function for browsing web pages.
- the main control unit 501 has an image processing function such as displaying video on the display input unit 520 based on image data (still image data or moving image data) such as received data or downloaded streaming data.
- the image processing function is a function in which the main control unit 501 decodes the image data, performs image processing on the decoding result, and displays an image on the display input unit 520.
- the main control unit 501 executes display control for the display panel 521 and operation detection control for detecting a user operation through the operation unit 540 and the operation panel 522.
- By executing the display control, the main control unit 501 displays icons for starting application software, soft keys such as a scroll bar, or a window for creating an e-mail.
- the scroll bar refers to a soft key for accepting an instruction to move a display portion of an image such as a large image that cannot be accommodated in the display area of the display panel 521.
- By executing the operation detection control, the main control unit 501 detects a user operation through the operation unit 540, accepts an operation on an icon or an input of a character string into the input field of a window through the operation panel 522, or accepts a scroll request for the displayed image through the scroll bar.
- Further, by executing the operation detection control, the main control unit 501 determines whether the operation position on the operation panel 522 is in the overlapping portion that overlaps the display panel 521 (display area) or in the outer edge portion that does not overlap the display panel 521 (non-display area), and provides a touch panel control function for controlling the sensitive area of the operation panel 522 and the display positions of the soft keys.
- the main control unit 501 can also detect a gesture operation on the operation panel 522 and execute a preset function according to the detected gesture operation.
- A gesture operation is not a conventional simple touch operation, but an operation of drawing a trajectory with a finger or the like, designating a plurality of positions simultaneously, or a combination of these, such as drawing a trajectory from at least one of a plurality of positions.
- The camera unit 541 is a digital camera that captures images using an imaging element such as a CMOS or CCD sensor, and has functions equivalent to those of the imaging apparatus 100 shown in FIG. 1.
- the camera unit 541 can switch between a manual focus mode and an autofocus mode.
- the photographing lens of the camera unit 541 can be focused by operating a focus icon button or the like displayed on the operation unit 540 or the display input unit 520.
- a live view image obtained by combining the split images is displayed on the display panel 521 so that the in-focus state during manual focus can be confirmed.
- Under the control of the main control unit 501, the camera unit 541 converts the image data obtained by imaging into compressed image data such as JPEG (Joint Photographic Experts Group) data, and the converted image data can be recorded in the storage unit 550 or output through the external input/output unit 560 or the wireless communication unit 510.
- In the smartphone 500 shown in FIG. 22, the camera unit 541 is mounted on the same surface as the display input unit 520, but the mounting position of the camera unit 541 is not limited to this; the camera unit 541 may be mounted on the back surface of the display input unit 520, or a plurality of camera units 541 may be mounted. When a plurality of camera units 541 are mounted, imaging may be performed with a single camera unit 541 selected by switching, or with a plurality of camera units 541 used simultaneously.
- the camera unit 541 can be used for various functions of the smartphone 500.
- an image acquired by the camera unit 541 can be displayed on the display panel 521, or the image of the camera unit 541 can be used as one of operation inputs of the operation panel 522.
- When the GPS receiving unit 570 detects the position, the position can also be detected with reference to an image from the camera unit 541. Further, by referring to an image from the camera unit 541, the optical axis direction of the camera unit 541 of the smartphone 500 can be determined, either without using the triaxial acceleration sensor or in combination with the triaxial acceleration sensor, and the current usage environment can also be determined.
- the image from the camera unit 541 can be used in the application software.
- Further, various information can be added to still image or moving image data and recorded in the storage unit 550 or output through the external input/output unit 560 or the wireless communication unit 510. Examples of the "various information" here include position information acquired by the GPS receiving unit 570, audio information acquired by the microphone 532 (which may be converted into text information by speech-to-text conversion in the main control unit or the like), and posture information acquired by the motion sensor unit 580.
- the split image divided into two in the vertical direction is illustrated, but the present invention is not limited to this, and an image divided into a plurality of parts in the horizontal direction or the diagonal direction may be applied as the split image.
- the split image 66a shown in FIG. 24 is divided into odd lines and even lines by a plurality of dividing lines 63a parallel to the row direction.
- A line-shaped (for example, strip-shaped) phase difference image 66La generated based on the output signal output from the first pixel group is displayed on the odd lines (the even lines may be used instead).
- A line-shaped (for example, strip-shaped) phase difference image 66Ra generated based on the output signal output from the second pixel group is displayed on the even lines.
- the split image 66b shown in FIG. 25 is divided into two by a dividing line 63b (for example, a diagonal line of the split image 66b) having an inclination angle in the row direction.
- the phase difference image 66Lb generated based on the output signal output from the first pixel group is displayed in one area.
- the phase difference image 66Rb generated based on the output signal output from the second pixel group is displayed in the other region.
- the split image 66c shown in FIGS. 26A and 26B is divided by grid-like dividing lines 63c parallel to the row direction and the column direction, respectively.
- the phase difference image 66Lc generated based on the output signal output from the first pixel group is displayed in a checkered pattern (checker pattern).
- the phase difference image 66Rc generated based on the output signal output from the second pixel group is displayed in a checkered pattern.
- another in-focus confirmation image may be generated from the two phase difference images, and the in-focus confirmation image may be displayed.
- Alternatively, the two phase difference images may be superimposed and displayed as a composite image; when out of focus, the image is displayed as a double image, and when in focus, the image is displayed clearly.
- In each of the above embodiments, the imaging element 20 having the first to third pixel groups is illustrated, but the present invention is not limited to this; an imaging element consisting of only the first pixel group and the second pixel group may be used. A digital camera having this type of imaging element can generate a three-dimensional image (3D image) based on the first image output from the first pixel group and the second image output from the second pixel group, and can also generate a two-dimensional image (2D image).
- the generation of the two-dimensional image is realized, for example, by performing an interpolation process between pixels of the same color in the first image and the second image.
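- One possible realisation of this interpolation, as a sketch only, is a plain per-pixel average of the two phase difference images:

```python
import numpy as np

def two_dimensional_image(left_eye, right_eye):
    """Generate a 2D image from a phase-difference-only sensor by
    interpolating corresponding same-colour pixels of the first and second
    images; the plain average used here is only one possible realisation
    of the inter-pixel interpolation the text mentions."""
    acc = left_eye.astype(np.uint16) + right_eye.astype(np.uint16)  # avoid uint8 overflow
    return (acc // 2).astype(np.uint8)
```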
- the display control unit 36 may be configured to suppress continuous display as a moving image of a normal image on the display device, and to perform control for continuously displaying a split image as a moving image on the display device.
- “suppressing the display of a normal image” refers to not displaying a normal image on a display device, for example.
- Here, the split image refers, for example, to an image based on the images output from the phase difference pixel groups (for example, a split image based on the first image output from the first pixel group and the second image output from the second pixel group) when a specific imaging element is used. Examples of "when a specific imaging element is used" include the case where an imaging element consisting of only phase difference pixel groups (for example, the first pixel group and the second pixel group) is used, and the case where an imaging element in which phase difference pixels (for example, the first pixel group and the second pixel group) are arranged at a predetermined ratio with respect to normal pixels is used.
- various conditions are conceivable as conditions for suppressing the normal image display and displaying the split image.
- For example, the display control unit 36 may perform control to display the split image without displaying the normal image on the display device when the photographer looks into the hybrid viewfinder, when the release button 211 is pressed halfway, or under other predetermined conditions.
- In each of the above embodiments, the display control unit 36 suppresses the display of the normal image, but the present invention is not limited to this; for example, the display control unit 36 may perform control to display a full-screen split image over the normal image.
- the imaging apparatus 100 (100A) described in each of the above embodiments may have a function of confirming the depth of field (depth of field confirmation function).
- the imaging apparatus 100 has a depth-of-field confirmation key.
- the depth-of-field confirmation key may be a hard key or a soft key.
- A momentary operation type switch (non-holding type switch) is preferably applied as the depth-of-field confirmation key. The momentary operation type switch mentioned here refers, for example, to a switch that maintains a specific operation state in the imaging apparatus 100 only while being pushed to a predetermined position.
- While the depth-of-field confirmation key is pressed, the aperture value is changed, and it continues to change until it reaches a limit value. Because the aperture value changes in this way, the phase difference necessary for obtaining the split image may not be obtained. Therefore, while the key is being pressed, the display may be changed from the split image to the normal live view display, and the CPU 12 may switch the screen so that the split image is displayed again when the pressed state is released.
- Here, the momentary operation type switch is applied as an example of the depth-of-field confirmation key, but the present invention is not limited to this; an alternate operation type switch (holding type switch) may be applied.
- Each process included in the image output process described in each of the above embodiments may be realized by a software configuration, that is, by a computer executing a program, by another hardware configuration, or by a combination of a hardware configuration and a software configuration.
- When the processing is realized by a software configuration, the program may be stored in a predetermined storage area (for example, the memory 26) in advance; however, it is not always necessary to store the program in the memory 26 from the beginning. For example, the program may first be stored in an arbitrary portable storage medium such as an SSD (Solid State Drive), CD-ROM, DVD disc, magneto-optical disc, or IC card to be connected to the computer, and the computer may then acquire the program from the portable storage medium and execute it. Alternatively, the program may be stored in another computer or server device connected to the computer via the Internet, a LAN (Local Area Network), or the like, and the computer may acquire the program from there and execute it.
- imaging lens; 20 imaging device; 26 memory; 28A image acquisition unit; 28B generation unit; 28C correction unit; 28D determination unit; 36A, 36B display control unit; 100, 100A imaging device; 213 display unit; 247 LCD
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Studio Devices (AREA)
- Automatic Focus Adjustment (AREA)
- Indication In Cameras, And Counting Of Exposures (AREA)
- Focusing (AREA)
- Image Processing (AREA)
Abstract
Description
FIG. 1 is a perspective view showing an example of the appearance of the imaging apparatus 100 according to the first embodiment, and FIG. 2 is a rear view of the imaging apparatus 100 shown in FIG. 1.
In the first embodiment described above, gradation correction is performed using a pixel value conversion function determined based on the direction of the spatial frequency intensity distribution; in the second embodiment, a case where gradation correction is performed using a pixel value conversion function determined based on the maximum values of the histogram will be described. In the following, components identical to those of the first embodiment are denoted by the same reference numerals, and their description is omitted.
In each of the above embodiments, the imaging apparatus 100 (100A) has been exemplified; portable terminal devices that are modifications of the imaging apparatus 100 (100A) include, for example, mobile phones and smartphones having a camera function, as well as PDAs (Personal Digital Assistants) and portable game machines. In the third embodiment, a smartphone is taken as an example and described in detail with reference to the drawings.
20 imaging element
26 memory
28A image acquisition unit
28B generation unit
28C correction unit
28D determination unit
36A, 36B display control unit
100, 100A imaging apparatus
213 display unit
247 LCD
Claims (16)
- An image processing device comprising:
an image acquisition unit that acquires first and second images based on first and second image signals output from an image sensor having first and second pixel groups, the first and second pixel groups outputting the first and second image signals by respectively forming pupil-divided subject images that have passed through first and second regions of an imaging lens;
a generation unit that generates a first display image based on an image signal output from the image sensor, and generates, based on the first and second images, a second display image used for focus confirmation;
a correction unit that corrects a gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic of the second display image and an extremum of a histogram of pixel values of the second display image;
a display unit that displays an image; and
a display control unit that performs control to cause the display unit to display the first display image generated by the generation unit and to display, within a display area of the first display image, the second display image whose gradation has been corrected by the correction unit.
- An image processing device comprising:
an image acquisition unit that acquires first and second images based on first and second image signals output from an image sensor having first and second pixel groups, the first and second pixel groups outputting the first and second image signals by respectively forming pupil-divided subject images that have passed through first and second regions of an imaging lens;
a generation unit that generates a first display image based on an image signal output from the image sensor, and generates, based on the first and second images, a second display image used for focus confirmation;
a correction unit that corrects a gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic of the second display image and an extremum of a histogram of pixel values of the second display image;
a display unit that displays an image; and
a display control unit that performs control to cause the display unit to display the second display image whose gradation has been corrected by the correction unit.
- The image processing device according to claim 1 or claim 2, wherein the spatial frequency characteristic is a direction in which a spatial frequency intensity in the second display image is maximum.
- The image processing device according to claim 3, wherein the gradation correction coefficient is a gradation correction coefficient that increases a contrast of an image region having luminance less than a predetermined value as a degree of coincidence between the direction in which the spatial frequency intensity is maximum and a parallax direction based on the first image and the second image decreases.
- The image processing device according to any one of claims 1 to 4, wherein the second display image includes an image in which a first image region included in the first image and a second image region included in the second image are combined along a direction intersecting a pupil-division direction, and
the gradation correction information is determined based on at least one of the spatial frequency characteristic in a first boundary region including a boundary between the first image region and the second image region included in the second display image and an extremum of the histogram of pixel values of the second display image.
- The image processing device according to claim 5, wherein the correction unit corrects a gradation of the first boundary region in accordance with the gradation correction information.
- The image processing device according to any one of claims 1 to 4, wherein the second display image includes an image in which a first image region included in the first image and a second image region included in the second image are combined along a direction intersecting the pupil-division direction, and
the correction unit corrects, in accordance with the gradation correction information, a gradation of a first boundary region including a boundary between the first image region and the second image region included in the second display image.
- The image processing device according to any one of claims 1 to 7, wherein, when the histogram has a plurality of local maxima, the gradation correction information determined based on the histogram specifies making a contrast between pixels corresponding to a specific pair of local maxima among the plurality of local maxima larger than the contrast before correction.
- The image processing device according to claim 8, further comprising a determination unit that determines the contrast in accordance with a given instruction.
- The image processing device according to any one of claims 1 to 9, wherein the correction unit corrects the gradation of the second display image in accordance with the gradation correction information when an occupancy of pixels having saturation equal to or higher than a predetermined value in a second boundary region, on the first display image side, adjacent to a boundary between the first display image and the second display image is less than a threshold.
- The image processing device according to claim 10, wherein the correction unit adjusts the gradation correction information using an adjustment value determined in accordance with the occupancy, and corrects the gradation of the second display image in accordance with the adjusted gradation correction information.
- The image processing device according to any one of claims 1 to 11, wherein the image sensor further has a third pixel group that outputs a third image signal by forming a subject image that has passed through the imaging lens without pupil division, and
the generation unit generates the first display image based on the third image signal output from the third pixel group.
- An imaging device comprising:
the image processing device according to any one of claims 1 to 12;
an image sensor having the first and second pixel groups; and
a storage unit that stores an image generated based on an image signal output from the image sensor.
- An image processing method comprising:
acquiring first and second images based on first and second image signals output from an image sensor having first and second pixel groups, the first and second pixel groups outputting the first and second image signals by respectively forming pupil-divided subject images that have passed through first and second regions of an imaging lens;
generating a first display image based on an image signal output from the image sensor;
generating, based on the first and second images, a second display image used for focus confirmation;
correcting a gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic of the second display image and an extremum of a histogram of pixel values of the second display image; and
performing control to cause a display unit that displays an image to display the generated first display image and to display, within a display area of the first display image, the second display image whose gradation has been corrected.
- An image processing method comprising:
acquiring first and second images based on first and second image signals output from an image sensor having first and second pixel groups, the first and second pixel groups outputting the first and second image signals by respectively forming pupil-divided subject images that have passed through first and second regions of an imaging lens;
generating a first display image based on an image signal output from the image sensor;
generating, based on the first and second images, a second display image used for focus confirmation;
correcting a gradation of the second display image in accordance with gradation correction information determined based on at least one of a spatial frequency characteristic of the second display image and an extremum of a histogram of pixel values of the second display image; and
performing control to cause the display unit to display the second display image whose gradation has been corrected.
- An image processing program for causing a computer to function as the image acquisition unit, the generation unit, the correction unit, and the display control unit of the image processing device according to any one of claims 1 to 12.
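To make the data flow recited in the claims above concrete, here is a minimal Python/NumPy sketch, an interpretation under stated assumptions rather than the patented implementation: 8-bit grayscale left/right (pupil-divided) images of equal shape, a horizontal pupil-division direction, a simple FFT-based estimate of the direction of maximum spatial frequency intensity, and an illustrative gamma-style curve standing in for the claimed gradation correction coefficients:

```python
import numpy as np

def generate_split_image(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Generation unit: combine a region of each image along a direction
    intersecting the (horizontal) pupil-division direction; the seam shifts
    when the subject is out of focus, which is what focus confirmation uses."""
    h = left.shape[0]
    return np.vstack([left[: h // 2], right[h // 2 :]])

def dominant_frequency_direction(img: np.ndarray) -> float:
    """Spatial frequency characteristic: angle (radians) of the strongest
    non-DC component of the 2-D amplitude spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    spec[cy, cx] = 0.0  # ignore the DC term
    fy, fx = np.unravel_index(int(np.argmax(spec)), spec.shape)
    return float(np.arctan2(fy - cy, fx - cx))

def correct_gradation(img: np.ndarray, direction: float,
                      parallax_dir: float = 0.0) -> np.ndarray:
    """Correction unit (illustrative): the lower the coincidence between the
    dominant direction and the parallax direction, the more contrast is given
    to dark regions via a gamma < 1 conversion curve (uint8 input assumed)."""
    mismatch = abs(np.sin(direction - parallax_dir))  # 0 = aligned, 1 = orthogonal
    gamma = 1.0 - 0.5 * mismatch
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

def compose_display(first: np.ndarray, split: np.ndarray) -> np.ndarray:
    """Display control unit: place the corrected split image within the
    display area of the first display image (centered; split must fit)."""
    out = first.copy()
    y0 = (first.shape[0] - split.shape[0]) // 2
    x0 = (first.shape[1] - split.shape[1]) // 2
    out[y0 : y0 + split.shape[0], x0 : x0 + split.shape[1]] = split
    return out
```

In use, the four stand-ins chain exactly as the method claims recite: generate the split image from the acquired pair, derive correction information from its spatial frequency characteristic (or from the histogram extrema of the earlier sketch), correct its gradation, and composite it into the first display image for display.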
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015507944A JP6086975B2 (ja) | 2013-03-27 | 2013-11-08 | Image processing device, imaging device, image processing method, and image processing program |
CN201380075101.9A CN105075235B (zh) | 2013-03-27 | 2013-11-08 | Image processing device, imaging device, and image processing method |
US14/861,623 US9485410B2 (en) | 2013-03-27 | 2015-09-22 | Image processing device, imaging device, image processing method and computer readable medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013066567 | | | |
JP2013-066567 | 2013-03-27 | | |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/861,623 Continuation US9485410B2 (en) | 2013-03-27 | 2015-09-22 | Image processing device, imaging device, image processing method and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014155812A1 (ja) | 2014-10-02 |
Family
ID=51622849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/080322 WO2014155812A1 (ja) | 2013-11-08 | Image processing device, imaging device, image processing method, and image processing program |
Country Status (4)
Country | Link |
---|---|
US (1) | US9485410B2 (ja) |
JP (1) | JP6086975B2 (ja) |
CN (1) | CN105075235B (ja) |
WO (1) | WO2014155812A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021016103A (ja) * | 2019-07-12 | 2021-02-12 | Kabushiki Kaisha Tokai Rika Denki Seisakusho | Image processing device and image processing program |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5753321B2 (ja) * | 2012-09-19 | 2015-07-22 | FUJIFILM Corporation | Imaging device and focus confirmation display method |
CN105026976B (zh) * | 2013-03-29 | 2017-05-03 | FUJIFILM Corporation | Image processing device, imaging device, and image processing method |
JP6188448B2 (ja) * | 2013-06-26 | 2017-08-30 | Olympus Corporation | Image processing device |
JP2016197145A (ja) * | 2015-04-02 | 2016-11-24 | Toshiba Corporation | Image processing device and image display device |
CN108289592A (zh) * | 2015-12-01 | 2018-07-17 | Olympus Corporation | Imaging device, endoscope device, and imaging method |
US10771672B2 (en) * | 2016-08-02 | 2020-09-08 | Fuji Corporation | Detachable-head-type camera and work machine |
JP6620079B2 (ja) * | 2016-09-08 | 2019-12-11 | Sony Interactive Entertainment Inc. | Image processing system, image processing method, and computer program |
JP6889774B2 (ja) * | 2017-04-20 | 2021-06-18 | Sharp Corporation | Image processing device, imaging device, image printing device, control method of image processing device, and image processing program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002023245A (ja) * | 2000-07-07 | 2002-01-23 | Olympus Optical Co Ltd | Camera |
JP2009163220A (ja) * | 2007-12-14 | 2009-07-23 | Canon Inc | Imaging device |
JP2009276426A (ja) * | 2008-05-13 | 2009-11-26 | Canon Inc | Imaging device |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4289225B2 (ja) * | 2004-06-18 | 2009-07-01 | Nikon Corporation | Image processing device and image processing method |
JP4934326B2 (ja) | 2005-09-29 | 2012-05-16 | FUJIFILM Corporation | Image processing device and processing method thereof |
JP5211521B2 (ja) * | 2007-03-26 | 2013-06-12 | Nikon Corporation | Image processing device, image processing method, image processing program, and camera |
JP5043626B2 (ja) | 2007-12-13 | 2012-10-10 | Canon Inc. | Imaging device |
JP5261796B2 (ja) * | 2008-02-05 | 2013-08-14 | FUJIFILM Corporation | Imaging device, imaging method, image processing device, image processing method, and program |
JP5200955B2 (ja) * | 2008-02-14 | 2013-06-05 | Nikon Corporation | Image processing device, imaging device, and image processing program |
JP5050928B2 (ja) * | 2008-02-28 | 2012-10-17 | Sony Corporation | Imaging device and image sensor |
JP5317562B2 (ja) * | 2008-07-17 | 2013-10-16 | Canon Inc. | Phase difference detection device, imaging device, phase difference detection method, and phase difference detection program |
JP5300414B2 (ja) * | 2008-10-30 | 2013-09-25 | Canon Inc. | Camera and camera system |
JP5563283B2 (ja) * | 2009-12-09 | 2014-07-30 | Canon Inc. | Image processing device |
JP5642433B2 (ja) | 2010-06-15 | 2014-12-17 | FUJIFILM Corporation | Imaging device and image processing method |
JP2012003080A (ja) * | 2010-06-17 | 2012-01-05 | Olympus Corp | Imaging device |
WO2012073728A1 (ja) * | 2010-11-30 | 2012-06-07 | FUJIFILM Corporation | Imaging device and focus position detection method thereof |
JP5954964B2 (ja) * | 2011-02-18 | 2016-07-20 | Canon Inc. | Imaging device and control method thereof |
JP2012182332A (ja) * | 2011-03-02 | 2012-09-20 | Sony Corp | Image sensor and imaging device |
JP5404693B2 (ja) * | 2011-05-18 | 2014-02-05 | Canon Inc. | Image sensor, imaging device including the same, and camera system |
CN104521231B (zh) * | 2012-08-10 | 2016-12-21 | Nikon Corporation | Image processing device and imaging device |
-
2013
- 2013-11-08 WO PCT/JP2013/080322 patent/WO2014155812A1/ja active Application Filing
- 2013-11-08 CN CN201380075101.9A patent/CN105075235B/zh active Active
- 2013-11-08 JP JP2015507944A patent/JP6086975B2/ja active Active
-
2015
- 2015-09-22 US US14/861,623 patent/US9485410B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN105075235B (zh) | 2018-07-06 |
JPWO2014155812A1 (ja) | 2017-02-16 |
US9485410B2 (en) | 2016-11-01 |
US20160028940A1 (en) | 2016-01-28 |
JP6086975B2 (ja) | 2017-03-01 |
CN105075235A (zh) | 2015-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6033454B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP6086975B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5681329B2 (ja) | Imaging device and image display method | |
JP5889441B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5931206B2 (ja) | Image processing device, imaging device, program, and image processing method | |
JP5960286B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP6158340B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5901782B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP6000446B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5901781B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5901801B2 (ja) | Image processing device, imaging device, program, and image processing method | |
JP5833254B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5753323B2 (ja) | Imaging device and image display method | |
WO2014106916A1 (ja) | Image processing device, imaging device, program, and image processing method | |
JP5972485B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5901780B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
JP5934844B2 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
WO2014045741A1 (ja) | Image processing device, imaging device, image processing method, and image processing program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201380075101.9; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13880449; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015507944; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13880449; Country of ref document: EP; Kind code of ref document: A1 |