CN101426085B - Imaging arrangements and methods therefor - Google Patents
- Publication number: CN101426085B
- Authority: CN (China)
- Prior art keywords: light, image, example embodiment, main lens, frame
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Studio Devices (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Image Input (AREA)
Abstract
Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates determination of the directions from which the various portions of light incident upon a portion of the focal plane emanate. Using this directional information together with the values of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected.
Description
This application is a divisional of the invention patent application entitled "Imaging device and method thereof", filed as international application No. PCT/US2005/035189 with an international filing date of September 30, 2005, which entered the Chinese national stage under application No. 200580039822.X.
Related patent documents
Under 35 U.S.C. § 119(e), this patent document claims priority to U.S. Provisional Patent Application Serial No. 60/615,179, filed on October 1, 2004, and to U.S. Provisional Patent Application Serial No. 60/647,492, filed on January 27, 2005, both of which are incorporated herein by reference in their entirety.
Technical field
The present invention relates generally to imaging applications, and more particularly to processing image data in order to focus and/or correct images.
Background
Imaging applications such as those involving cameras, video cameras, microscopes and telescopes are generally limited in the amount of light they gather. That is, most imaging devices do not record most of the information about the distribution of light entering the device. For example, conventional cameras such as digital still cameras and video cameras do not record most of the information about the light distribution entering from the outside world. In these devices, the gathered light typically cannot be processed in a variety of ways after capture, such as by focusing at different depths (distances from the imaging device), correcting lens aberrations, or manipulating the viewing angle.
For still-imaging applications, a typical imaging device capturing a particular scene usually focuses on a target or object in the scene, with other parts of the scene falling out of focus. Video-imaging applications suffer a similar problem, in that the image acquisition used in video applications captures scenes with only portions of the scene in focus.
Many imaging applications suffer from aberrations of the device (lenses) used to gather light. Such aberrations may include, for example, spherical aberration, chromatic aberration, distortion, field curvature, oblique astigmatism and coma. Correcting aberrations generally involves the use of corrective optics, which tend to add bulk, expense and weight to the imaging device. In some applications that benefit from small-scale optics, such as camera phones and security cameras, the physical limitations associated with the application make the inclusion of additional optics undesirable.
These and other difficulties have presented challenges to imaging applications, including those involving the acquisition and alteration of digital images.
Summary of the invention
The present invention is directed to overcoming the above-mentioned challenges and others related to imaging devices and their implementations. The present invention is exemplified in a number of implementations and applications, some of which are summarized below.
According to an example embodiment of the present invention, light is detected together with directional information characterizing the detected light. The directional information is used together with the detected light to generate a virtual image corresponding to a refocused image, a corrected image, or both.
According to another example embodiment of the present invention, a scene having two or more targets at different focal depths is imaged, with portions of the scene corresponding to each target imaged at different focal planes. Light from the scene that is focused at a physical focal plane is detected together with information characterizing the direction from which the light arrived at particular locations on the physical focal plane. For at least one target located at a depth of field that is not focused at the physical focal plane, a virtual focal plane that is different from the physical focal plane is determined. Using the detected light and its directional characteristics, portions of the light corresponding to a focused image of the at least one target at the virtual focal plane are collected and added together to form a virtual focused image of the at least one target.
According to yet another example embodiment of the present invention, a scene is digitally imaged. Light from the scene that is transmitted to different locations on a focal plane is detected, and the angle of incidence of the detected light at the different locations on the focal plane is determined. The depth of field of the portion of the scene from which the detected light emanated is detected and used, together with the determined angles of incidence, to digitally rearrange the detected light. Depending upon the application, the rearrangement includes refocusing and/or correcting for lens aberration.
The above summary of the present invention is not intended to describe each embodiment or every implementation of the present invention. The figures and the detailed description that follow more particularly exemplify these embodiments.
Description of drawings
The invention may be more completely understood in consideration of the following detailed description of various embodiments of the invention in connection with the accompanying drawings, in which:
Fig. 1 is a light capture and processing arrangement, according to an example embodiment of the present invention;
Fig. 2 is an optical imaging device, according to another example embodiment of the present invention;
Fig. 3 is a flow diagram for image processing, according to another example embodiment of the present invention;
Fig. 4 is a flow diagram for generating a preview image, according to another example embodiment of the present invention;
Fig. 5 is a flow diagram for processing and compressing image data, according to another example embodiment of the present invention;
Fig. 6 is a flow diagram for image synthesis, according to another example embodiment of the present invention;
Fig. 7 is a flow diagram for image refocusing, according to another example embodiment of the present invention;
Fig. 8 is a flow diagram for extending the depth of field of an image, according to another example embodiment of the present invention;
Fig. 9 is a flow diagram for another approach to extending the depth of field of an image, according to another example embodiment of the present invention;
Figure 10 illustrates an example approach to separating light rays, according to another example embodiment of the present invention;
Figure 11 illustrates an approach to mapping sensor pixel locations to rays in the L(u, v, s, t) space, relative to collected image data, according to another example embodiment of the present invention;
Figure 12 illustrates several images refocused at different depths, according to another example embodiment of the present invention;
Figure 13A illustrates a 2D imaging configuration, according to another example embodiment of the present invention;
Figure 13B illustrates the summation of a cone of rays from a 3D point corresponding to a single pixel, according to another example embodiment of the present invention;
Figures 14A-14C illustrate an approach to computing images with different depths of field, according to another example embodiment of the present invention;
Figure 15 illustrates an approach to tracing the rays from a 3D point on a virtual film plane, according to another example embodiment of the present invention;
Figure 16 illustrates an approach to obtaining the value of a ray, according to another example embodiment of the present invention;
Figure 17A illustrates an ideal 512x512 photograph, according to another example embodiment of the present invention;
Figure 17B illustrates an image produced with an f/2 biconvex spherical lens, according to another example embodiment of the present invention;
Figure 17C illustrates an image computed using an image correction approach, according to another example embodiment of the present invention;
Figures 18A-18C illustrate the tracing of rays in a color imaging system, according to another example embodiment of the present invention;
Figures 19A-19F illustrate an approach to implementing a mosaic array, according to another example embodiment of the present invention;
Figure 20 is a flow diagram of a computational method for refocusing in the Fourier domain, according to another example embodiment of the present invention;
Figure 21A illustrates a triangle filtering approach, according to another example embodiment of the present invention;
Figure 21B illustrates the Fourier transform of the triangle filtering approach, according to another example embodiment of the present invention;
Figure 22 is a flow diagram of an approach to refocusing in the frequency domain, according to another example embodiment of the present invention;
Figure 23 illustrates a set of rays passing through a desired focal point, according to another example embodiment of the present invention;
Figures 24A-B illustrate different views of a portion of a microlens array, according to another example embodiment of the present invention;
Figure 24C illustrates an image appearing on a photosensor, according to another example embodiment of the present invention;
Figure 25 illustrates an example embodiment of the present invention in which a virtual image is computed as it would appear on a virtual film;
Figure 26 illustrates an approach to manipulating a virtual lens plane, according to another example embodiment of the present invention;
Figure 27 illustrates that a virtual film, according to another example embodiment of the present invention, may take any shape;
Figure 28 illustrates an imaging arrangement, according to another example embodiment of the present invention;
Figure 29 is a flow diagram for precomputing a database of weights associating each output image pixel with each light sensor value, according to another example embodiment of the present invention;
Figure 30 is a flow diagram for computing an output image using the database of weights, according to another example embodiment of the present invention;
Figures 31A-D illustrate various scalar functions selectively implemented as virtual aperture functions, according to another example embodiment of the present invention;
Figure 32 illustrates a virtual aperture function that varies from pixel to pixel, according to another example embodiment of the present invention;
Figure 33 is a flow diagram in which a user selects a region of an output image, edits an image portion and saves the output image, according to another example embodiment of the present invention;
Figure 34 is a flow diagram for extending the depth of field of an image, according to another example embodiment of the present invention; and
Figure 35 is a flow diagram for computing a refocused image from received light sensor data, according to another example embodiment of the present invention.
While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention.
Detailed description
The present invention is believed to be applicable to a variety of different types of devices, and the invention has been found to be particularly suited to electronic imaging devices and applications. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of various examples using this context.
According to an example embodiment of the present invention, a four-dimensional (4D) light field (e.g., the light traveling along each ray in a region such as free space) is detected using an approach involving the determination of the amount and direction of light arriving at a sensor located at a focal plane. The two-dimensional position of light at the focal plane is detected, together with information characterizing the direction from which the light arrived at particular locations on the plane. With this approach, the distribution of directional lighting arriving at different locations on the sensor is determined and used to form an image. In the various discussions herein, the assembly or assemblies implemented for sensing and/or measuring the light field are referred to as a "light sensor" or "radiation sensor".
In one application, an approach similar to the above is implemented using an imaging system having optics and sensors that sample the space of light rays incident on an imaging plane, and computational functionality that renders images from the set of measured rays in different ways. Depending on the implementation, the optics, sensors and computational functionality are implemented in combination or separately, using different approaches. For example, a camera having a lens (optics) focused on a photosensor array (sensors) located at the imaging plane can be used to sample the ray space. The output from the photosensor array is used with the computational functionality (e.g., on a processor internal and/or external to the camera) to render images, such as photographs computed to focus at different depths or with different depths of field, and/or computationally corrected for lens aberrations to produce higher-quality images.
In another example embodiment, the optics and sensor components of an imaging system direct rays of light onto sensor elements such that each sensor element senses a set of rays including rays emanating from specific directions. In many applications, this set of rays is a bundle of rays that is localized in both space and direction. For many applications, this bundle of rays converges to a single geometric ray of light as the optics and sensor resolutions increase. Accordingly, various portions of the description herein refer to the values sensed by the sensor elements simply as "light rays" or "rays of light" or "rays", even though they are not generally limited to geometric rays.
Referring now to the figures, Figure 28 illustrates an imaging arrangement 2890 according to another example embodiment of the present invention. The imaging arrangement 2890 includes a main lens 2810 and a light sensor that measures the values of rays of light arriving at different locations on the sensor and from different incident directions. In this context, measuring the value of a ray can be implemented by detecting the light arriving at different locations on the sensor and generating a numerical value for a characteristic of the light, such as its intensity.
Fig. 1 illustrates an imaging system 100 according to another embodiment of the present invention. The imaging system 100 includes an imaging arrangement 190 having a main lens 110, a microlens array 120 and a photosensor array 130. In this case, the microlens array 120 and the photosensor array 130 implement a light sensor. Although Fig. 1 shows a particular main lens 110 (single element) and a particular microlens array 120, those skilled in the art will recognize that a variety of lenses and/or microlens arrays (currently available or developed in the future) can be selectively implemented with a similar approach, for example by replacing the illustrated main lens and/or microlens array.
Rays of light from a single point on a subject 105 in the imaged scene can arrive at a single convergence point on the focal plane of the microlens array 120. For example, this occurs when the distance "d" from the imaged point on the subject 105 to the main lens and the distance "s" from the main lens to the microlens array, as shown in the figure, are optically conjugate. A microlens 122 at this convergence point separates these rays of light according to their direction, creating a focused image of the aperture of the main lens 110 on the photosensors underneath the microlens.
The photosensor array 130 detects light incident upon it and generates an output that is processed using one or more of a variety of components. In this application, the output light data is passed to sensor data processing circuitry 140, which uses the data, together with positional information about each photosensor providing the data, to generate an image of the scene (e.g., including subjects 105, 106 and 107). The sensor data processing circuitry 140 is implemented, for example, with a computer or other processing circuitry realized in a common component (e.g., a chip) or in different components. In one implementation, a portion of the sensor data processing circuitry 140 is implemented in the imaging arrangement 190, with another portion implemented in an external computer. Using the detected light (and, e.g., characteristics of the detected light) together with the known incident directions of the light arriving at the microlens array (as computed using the known location of each photosensor), the sensor data processing circuitry 140 selectively refocuses and/or corrects the light data in forming an image. Various approaches to processing the detected light data are described in detail below, with and without reference to other figures. These approaches may be selectively implemented with a sensor data processing circuit consistent with the above.
Different portions of the imaging system 100 are selectively implemented in a common or separate physical arrangement, depending on the particular application. For example, when implemented with a variety of applications, the microlens array 120 and the photosensor array 130 may be combined into a common arrangement 180. In some applications, the microlens array 120 and the photosensor array 130 are coupled together on a common chip or other circuit arrangement. When implemented with a hand-held device such as a camera-like device, the main lens 110, microlens array 120 and photosensor array 130 are selectively combined into an integrated common imaging arrangement 190 in the hand-held device. Furthermore, certain applications involve implementing some or all of the sensor data processing circuitry 140 in a common circuit arrangement with the photosensor array 130 (e.g., on a common chip).
In some applications, the imaging system 100 includes a preview arrangement 150 for presenting a preview image to a user capturing an image. The preview arrangement is communicatively coupled to receive image data from the photosensor array 130. A preview processor 160 processes the image data to generate a preview image that is displayed on a preview screen 170. In some applications, the preview processor 160 is implemented on a common chip and/or in common circuitry with the image sensor 180. In applications in which the sensor data processing circuitry 140 is implemented with the photosensor array 130 as described above, the preview processor 160 is selectively implemented with the sensor data processing circuitry 140, with some or all of the image data collected by the photosensor array 130 used to generate the preview image.
The preview image may be generated with relatively less computational function and/or less data than that used to generate a final image. For example, when implemented with a hand-held imaging device such as a camera or mobile telephone, a preview image that does not effect any refocusing or lens correction may be sufficient. Accordingly, it may be desirable to implement relatively inexpensive and/or smaller processing circuitry to generate the preview image. In such applications, the preview processor generates the image at a relatively low computational cost and/or using less data, for example using the extended depth of field computation described below.
Depending upon the application, the imaging system 100 may be implemented in a variety of ways. For example, while the microlens array 120 is shown with several distinguishable microlenses by way of example, the array is generally implemented with a multitude (e.g., thousands or millions) of microlenses. The photosensor array 130 generally has a relatively finer pitch than the microlens array 120, with several photosensors for each microlens in the microlens array 120. In addition, the microlenses in the microlens array 120 and the photosensors in the photosensor array 130 are generally positioned such that light propagating through each microlens to the photosensor array does not overlap with light propagating through adjacent microlenses.
In various applications, the main lens 110 is translated along its optical axis (in the horizontal direction, as shown in Fig. 1) to focus on a subject of interest at a desired depth "d", exemplified between the main lens and an example imaging subject 105. By way of example, rays of light from a single point on the subject 105 are shown for purposes of discussion. These rays converge to a single point at microlens 122 on the focal plane of the microlens array 120. The microlens 122 separates these rays based on direction, creating a focused image of the aperture of the main lens 110 on the set of pixels 132 in the array of pixels underneath the microlens. Figure 10 illustrates one such example approach to separating rays, such that all rays emanating from a point on a main lens 1010 and arriving anywhere on the surface of the same microlens (e.g., 1022) are directed by that microlens to converge at the same point on a photosensor (e.g., 1023). The approach shown in Figure 10 may, for example, be implemented in connection with Fig. 1 (i.e., with the main lens 1010 implementing the main lens 110, the microlens array 1020 implementing the microlens array 120, and the photosensor array 1030 implementing the photosensor array 130).
The image that forms under a particular microlens (e.g., microlens 122) dictates the directional resolution of the system for that location on the imaging plane. In some applications, directional resolution is enhanced by sharpening the microlens images, achieved by focusing the microlenses on the principal plane of the main lens. In some applications, the microlenses are smaller, by two or more orders of magnitude, than the separation between the microlens array and the main lens 110. In such applications, the main lens 110 is effectively located at the microlenses' optical infinity; to focus the microlenses, the photosensor array 130 is located at a plane at the microlenses' depth of focus.
The separation between the microlens array 120 and the photosensor array 130 is selected within the depth of focus of the microlenses so that sharp images are realized. In many applications, this separation is accurate to within about Δx_p·(f_m/Δx_m), where Δx_p is the width of a sensor pixel, f_m is the focal depth of the microlenses and Δx_m is the width of the microlenses. In one particular application, Δx_p is about 9 microns, f_m is about 500 microns and Δx_m is about 125 microns, so that the separation between the microlens array 120 and the photosensor array 130 is accurate to about 36 microns.
The microlens array 120 is implemented using one or more of a variety of microlenses and arrangements thereof. In one example embodiment, a plane of microlenses with potentially spatially varying properties is implemented as the microlens array 120. For example, the microlens array may include lenses that are homogeneous and/or inhomogeneous, square or non-square in extent, regularly or irregularly distributed, and arranged in a pattern that may be repeating or non-repeating, with portions that are optionally masked. The microlenses themselves may be convex, non-convex, or have an arbitrary profile to effect a desired physical direction of light, and the profile may vary from microlens to microlens across the plane. Different distributions and lens profiles are selectively combined. These various embodiments provide sampling patterns that are higher spatially (and correspondingly lower angularly) in some regions of the array, and higher angularly (and correspondingly lower spatially) in other regions. One use of such data is to facilitate interpolation to match desired spatial and angular resolutions in the 4D space.
Figure 24A illustrates a view (with the line of sight perpendicular to the plane) of a portion of a microlens array according to another example embodiment of the present invention. The microlenses are square and regularly distributed in the array.
Figure 24B illustrates a view of a portion of a microlens array according to another example embodiment of the present invention. The distribution of microlenses on the plane is irregular or non-repeating, and the microlenses are arbitrary in shape.
Figure 24C illustrates the image appearing on the photosensor, in connection with another example embodiment of the present invention, using the distribution of convex shapes shown in Figure 24A and a main lens having a circular aperture.
In other example embodiments, a regular mosaic of larger and smaller microlenses is used. In one implementation, the acquired photosensor data is interpolated to provide a uniform sampling having the maximum spatial and angular resolution of the one or more microlenses in the mosaic.
Figures 19A-19F illustrate an approach to implementing a mosaic array such as that described above, in connection with one or more example embodiments of the present invention. Figure 19A is a top-down view illustrating the relative sizes and arrangement of an example plurality of microlenses. Figure 19B is a view of the image shapes formed on the photosensor array after projection through each microlens in Figure 19A. Figure 19C is a cross-sectional view of the array in Figure 19A, showing that the microlenses have the same f-number and that their focal points lie in the same plane. This requires that the smaller microlenses be positioned closer to the focal plane than the larger microlenses. This results in main-lens images behind each microlens that are more numerous but non-overlapping, and that are focused on the photosensor placed at the plane containing the focal points.
Figure 19D illustrates a cross-sectional view of the mosaic of microlenses shown in Figures 19A and 19C, implemented in a complete imaging arrangement including a main lens 1910, microlens array 1920 and photosensor arrangement 1930, according to another example embodiment of the present invention. Note that while the figure shows several microlenses with a few pixels under each, the actual numbers of microlenses and pixels may be selected in different ways, such as by determining the resolution requirements of the given application and implementing appropriate numbers of each.
Figure 19E is a Cartesian ray diagram of the ray space, with each ray represented by its starting point u on the main lens 1910 and its ending point s on the microlens array 1920 (although the ray space is 4D, it is illustrated in 2D for clarity). The sets of rays integrated by each photosensor (labeled A-P) shown in Figure 19D are illustrated as Cartesian boxes in Figure 19E. In the complete 4D ray space, each photosensor integrates a 4D box of rays. Compared with the photosensors under the smaller microlenses, the 4D boxes of the photosensors under the larger microlenses have half the width (twice the resolution) along the directional (u, v) axes, and twice the width (half the resolution) along the spatial (x, y) axes.
In another example embodiment, the photosensor values are interpolated onto a regular grid, such that the resolution matches the maximum resolution in all dimensions. Figure 19F illustrates this approach, in which the light boxes representing each photosensor value are subdivided, with values interpolated from nearby box values. In the 2D ray space shown, each box is divided in two, but in the 4D space each box is divided in four (divided in two along each of its two longer sides). In some embodiments, the interpolated values are computed by analyzing neighboring values. In another embodiment, the interpolation is implemented as a weighted sum of the values of the original, unsubdivided boxes in the neighborhood of the desired value.
In some applications, the weighting is implemented in a manner that depends on a decision function based on the values in the neighborhood. For example, the weighting may interpolate along the axis that is least likely to contain an edge in the 4D function space. The likelihood of an edge near a value may be estimated from the magnitude of the gradient of the function values at those locations and from estimates of the Laplacian of the function.
In another example embodiment, each microlens (e.g., in the array 1920 in Figure 19D or similar) is tilted inwards so that all of their optical axes are centered on the aperture of the main lens. This approach reduces aberrations in the images formed under the microlenses towards the edges of the array.
Referring again to Fig. 1, the aperture sizes of the main lens 110 and the microlenses in the microlens array 120 (e.g., the effective sizes of the openings in the lenses) are also selected to suit the particular application in which the imaging arrangement 100 is implemented. In many applications, the relative aperture sizes are selected so that the collected images are as large as possible without overlapping (i.e., without light undesirably overlapping onto adjacent photosensors). This approach is facilitated by matching the f-numbers (focal ratios, i.e., the ratio of the aperture to the effective focal length of the lens) of the main lens and the microlenses. In this case, the effective focal length of the main lens 110, for purposes of the f-number, is the ratio between the diameter of the main lens aperture and the separation "s" between the main lens 110 and the microlens array 120. In applications in which the principal plane of the main lens 110 is translated relative to the plane at which the microlens array 120 is located, the aperture of the main lens is selectively modified to maintain this ratio, and hence the size of the images forming under each microlens in the microlens array. In some applications, different main lens aperture shapes, such as a square aperture, are used to achieve a desired (e.g., efficient) packing of the array of images on the photosensor surface under the microlens array.
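As a concrete illustration of this f-number matching rule, the following sketch (with hypothetical dimensions, not values taken from this document) checks that the image of the main-lens aperture formed under a microlens just fills the microlens when the two f-numbers agree:

```python
def f_number(aperture_diameter_mm, focal_length_mm):
    return focal_length_mm / aperture_diameter_mm

main_aperture = 25.0       # mm, assumed main-lens aperture diameter
main_to_microlens = 100.0  # mm, separation "s" acting as the effective focal length
microlens_width = 0.125    # mm
microlens_focal = 0.5      # mm

f_main = f_number(main_aperture, main_to_microlens)   # f/4
f_micro = f_number(microlens_width, microlens_focal)  # f/4

# Diameter of the main-lens aperture image under one microlens:
image_diameter = microlens_focal / f_main             # 0.125 mm = microlens width
print(f"main f/{f_main:.0f}, microlens f/{f_micro:.0f}, "
      f"aperture image {image_diameter * 1000:.0f} um")
```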
The following discussion relates to a number of example embodiments involving a common application of the imaging arrangement 100 of Fig. 1. Consider the two-plane light field "L" inside the imaging arrangement 100, where L(u, v, s, t) denotes the light traveling along the ray that intersects the main lens 110 at (u, v) and the plane of the microlens array 120 at (s, t). Assuming ideal microlenses in the microlens array 120 and ideal photosensors (e.g., pixels) aligned to a grid in the photosensor array 130, all the light that is transmitted to a photosensor also passes through its square parent microlens in the microlens array 120, and through the photosensor's conjugate square on the main lens 110. These two square regions on the main lens and the microlens specify a small four-dimensional box in the light field, and the photosensor measures the total amount of light in the set of rays represented by this box. Accordingly, each photosensor detects such a four-dimensional box in the light field, and the light field detected by the photosensor array 130 is a box-filtered, rectilinear sampling of L(u, v, s, t).
Figure 11 illustrates an approach to mapping sensor pixel locations to rays in the L(u, v, s, t) space, relative to collected image data, according to another example embodiment of the present invention. The approach shown in Figure 11 may, for example, be used in connection with Fig. 1, with each photosensor in the photosensor array 130 corresponding to a sensor pixel. The image 1170 in the bottom right corner is a downsampling of the raw data read from the light sensor 1130, and includes a circled image 1150 that forms under one circular microlens. The image 1180 in the bottom left corner is a magnified view of the portion of the raw data surrounding the circled microlens image 1150, in which one of the photosensor values 1140 within the microlens image is circled. Since the circular image 1150 is an image of the lens aperture, the position of the chosen pixel within the disk provides the (u, v) coordinates of the origin of the illustrated ray 1110 on the main lens. The position of the microlens image 1150 provides the (x, y) coordinates of the ray 1120 in the sensor image 1170.
Although the mapping of sensor elements to rays has been discussed with respect to this figure (and other example embodiments), the value associated with each sensor element may be selectively represented as the value of the set of light rays that the optics transmit to that particular sensor element. Thus, in the context of Fig. 1, each photosensor in the photosensor array can be implemented to provide a value representing the set of light rays transmitted through the main lens 110 and the microlens array 120 to the photosensor. That is, each photosensor generates an output in response to the light incident upon it, and the position of each photosensor relative to the microlens array 120 is used to provide directional information about the incident light.
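A minimal sketch of this pixel-to-ray bookkeeping is shown below. The regular layout (a square grid of microlenses with 8x8 sensor pixels under each) is an assumption for illustration, not a value taken from this document:

```python
N_U = 8  # assumed number of sensor pixels per microlens in each direction

def pixel_to_ray(px, py, n_u=N_U):
    """Map a raw sensor pixel to discrete light-field indices (u, v, s, t)."""
    s, t = px // n_u, py // n_u  # which microlens: spatial coordinates
    u, v = px % n_u, py % n_u    # offset within the microlens image: aperture coordinates
    return u, v, s, t

def ray_to_pixel(u, v, s, t, n_u=N_U):
    """Inverse mapping, used when resampling the box-filtered L(u, v, s, t)."""
    return s * n_u + u, t * n_u + v

u, v, s, t = pixel_to_ray(1035, 2049)
assert ray_to_pixel(u, v, s, t) == (1035, 2049)
```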
In an example embodiment, the resolution of the microlens array 120 is selected to match the desired final image resolution of the particular application. The resolution of the photosensor array 130 is selected so that each microlens covers as many photosensors as are needed to match the desired directional resolution of the application, or the finest photosensor resolution that can be achieved. Accordingly, in view of considerations such as imaging type, cost, complexity and the available equipment for achieving a particular resolution, the resolution of the imaging system 100 (and of the other systems discussed herein) is selectively tailored to suit particular applications.
Once the image data has been captured via the optics and sensors (e.g., using the imaging arrangement 190 in Fig. 1), a variety of computational functions and arrangements can be implemented to selectively process the image data. In one example embodiment of the present invention, different sets of photosensors capture the light passing separately through each microlens, and information about the captured light is passed to a computational component such as a processor. Images of the scene are computed from the set of measured rays.
In the context of Fig. 1, the sensor data processing circuitry 140 is implemented to process the image data and compute images of the scene including the subjects 105, 106 and 107. In some applications, a preview arrangement 150 with a preview processor 160 is also implemented to generate a preview image, with the preview image displayed on a preview screen 170. The preview processor 160 is selectively implemented with the sensor data processing circuitry 140, with the preview image generated, for example, in a manner consistent with the approaches discussed below.
In another embodiment, for each pixel of the image output from the sensor arrangement, a computational component weights and sums a subset of the measured rays. In addition, the computational component may analyze and combine a set of images computed in the manner described above, for example using image combination approaches. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of certain specific example embodiments of such a computational component.
In connection with various example embodiments, processing the image data involves refocusing at least a portion of the captured image. In some embodiments, output images are generated as photographs focused on desired elements of a particular scene. In some embodiments, the computed photograph is focused at a certain desired depth in the world (scene), with blur due to defocus increasing away from the desired depth, just as in a conventional photograph. Different focal depths may be selected to bring different subjects in the scene into focus.
Figure 12 illustrates several images 1200-1240 computed from a single measured light field and refocused at different depths, according to another example embodiment of the present invention. The approach shown in Figure 12 may, for example, be implemented using an imaging arrangement such as that shown in Fig. 1.
Figures 13A and 13B illustrate a refocusing approach according to another example embodiment of the present invention. This approach may be implemented, for example, with a computational/processing component of an imaging system, such as the sensor data processing circuitry 140 in Fig. 1. Each output pixel from the imaging arrangement (e.g., 1301) corresponds to a three-dimensional (3D) point (e.g., 1302) on a virtual film plane 1310. The virtual film plane 1310 is located behind the main lens 1330, where the plane 1310 is optically conjugate to the desired focal plane in the world (not shown). That is, the virtual film plane 1310 is located where a film plane would desirably be positioned to capture a simple two-dimensional (2D) image (e.g., this position is comparable to the location of the photographic film in a conventional camera positioned to capture a 2D image). By separating the rays according to direction (e.g., using the microlens array 120 of Fig. 1), the rays arriving at the virtual film plane 1310 can be selectively computed. Accordingly, the value of the output pixel 1301 can be computed by summing the cone of rays 1320 converging on the corresponding 3D point 1302. The values of these rays are taken from the data collected by the light sensor 1350. For ease of inspection, Figure 13A illustrates the imaging configuration in 2D. In Figure 13B, with a selected world focal depth closer to the main lens, the cone of rays 1330 from the 3D point 1340 is summed for the same pixel 1301.
In some embodiments, the required ray values do not correspond exactly to the discrete sample locations captured by the light sensor. In some embodiments, the ray values are estimated as a weighted sum of selected nearby sample locations. In some implementations, this weighting corresponds to a four-dimensional filter kernel that reconstructs a continuous four-dimensional light field from the discrete sensor samples. In some implementations, this four-dimensional filter is implemented as a four-dimensional tent function, corresponding to quadrilinear interpolation of the 16 nearest samples in the four-dimensional space.
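A sketch of such a quadrilinear reconstruction is given below, assuming the discrete light field is stored as a 4D array indexed [u, v, s, t]; clamping at the array boundary is one possible handling of edge samples:

```python
import numpy as np

def quadrilinear_sample(L, u, v, s, t):
    """Reconstruct L(u, v, s, t) at fractional coordinates with a 4D tent filter."""
    coords = (u, v, s, t)
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    value = 0.0
    for corner in range(16):                 # the 2^4 = 16 nearest samples
        idx, weight = [], 1.0
        for d in range(4):
            bit = (corner >> d) & 1
            idx.append(int(np.clip(base[d] + bit, 0, L.shape[d] - 1)))
            weight *= frac[d] if bit else (1.0 - frac[d])
        value += weight * L[tuple(idx)]
    return value

L = np.random.rand(8, 8, 64, 64)             # toy light field
print(quadrilinear_sample(L, 3.2, 4.7, 10.5, 20.1))
```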
Figure 35 is a flow diagram for computing a refocused image from received light sensor data, according to another example embodiment of the present invention. At block 3520, a set of sub-aperture images is extracted from the light sensor data 3510, where each sub-aperture image consists of a single pixel taken from under each microlens image, with each pixel at the same relative position under its microlens. At block 3530, the set of sub-aperture images is combined to generate the final output image. The sub-aperture images may optionally be translated relative to one another and composited so as to bring a desired plane into focus.
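One way to realize this flow is the shift-and-add sketch below; the array layout and the refocus parameter `alpha` are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, alpha):
    """lf: sub-aperture stack of shape (n_u, n_v, height, width); returns one image."""
    n_u, n_v, h, w = lf.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            # Translate each sub-aperture image in proportion to its distance
            # from the aperture center, then sum the translated images.
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += nd_shift(lf[u, v], (dy, dx), order=1, mode="nearest")
    return out / (n_u * n_v)
```

With `alpha = 0` the sub-aperture images are summed without translation, reproducing focus at the original plane; increasing the magnitude of `alpha` moves the virtual focal plane away from that plane.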
In another example embodiment, the darkening associated with pixels near the edges of the output image is mitigated. For example, for pixels near the edge of the output image, some of the required rays may not have been captured in the measured light field (they may exceed the spatial or directional bounds of the imaging arrangement, such as the microlens array 120 and photosensor array 130 in Fig. 1). In applications in which this darkening is undesirable, the pixel values are normalized by dividing the value associated with the pixel (e.g., as captured by the photosensor array) by the fraction of the rays that are actually found in the measured light field.
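A sketch of this normalization, with hypothetical bookkeeping arrays that count how many of each pixel's rays fell inside the measured light field:

```python
import numpy as np

def normalize_edges(image_sum, rays_found, rays_total):
    """Divide each pixel by the fraction of its rays found in the light field."""
    frac = rays_found / float(rays_total)
    out = np.zeros_like(image_sum, dtype=float)
    np.divide(image_sum, frac, out=out, where=frac > 0)
    return out
```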
As discussed above, a variety of different computational approaches are selectively used for different applications. Several such approaches are set forth below. In some applications, reference is made to figures; in others, the approaches are described generally. In each of these applications, the particular approach may be implemented using a computational component such as the sensor data processing circuitry 140 shown in Fig. 1.
In another example embodiment, the image formation approach for each output pixel of a particular imaging system corresponds to a virtual camera model, in which the virtual film may be arbitrarily and/or selectively rotated or deformed, and the virtual main lens aperture may correspondingly be moved and changed in size as desired. As an example, Figure 25 illustrates an example embodiment in which the virtual image is computed as it would appear on a virtual film 2560, if the image were allowed to form through a virtual lens aperture 2520 of arbitrary size on a virtual lens plane 2530 that does not coincide with the physical main lens plane 2510. The value of a pixel corresponding to a point 2550 on the virtual film 2560 is computed by summing the rays that pass through the virtual aperture 2520 and converge on the point 2550, with the rays identified by their points of intersection with, and directions of incidence on, the light sensor 2570.
Figure 26 illustrates an approach to manipulating the virtual lens plane, according to another example embodiment of the present invention. The virtual lens plane 2630 and/or the virtual film plane 2660 are selectively tilted relative to the physical main lens or another reference. Images computed with this approach have a resulting world focal plane that is not parallel to the imaging plane.
In another example embodiment, as illustrated in Figure 27, the virtual film 2560 need not be planar, and may take any shape.
Various approaches involve the selective implementation of different apertures. In one example embodiment, the virtual aperture on the virtual lens plane is a typically circular hole, while in other example embodiments the virtual aperture is generally non-circular and/or implemented as a plurality of distinct regions of arbitrary shape. In these and other embodiments, the notion of a "virtual aperture" can be generalized and, in some applications, corresponds to an approach involving processing the ray data in a manner corresponding to the light that would be admitted through a selected "virtual" aperture.
In various embodiments, the virtual aperture approach is implemented with a predetermined but arbitrary function on the virtual lens plane. Figures 31A-31D illustrate different scalar functions selectively implemented as virtual aperture functions in connection with one or more example embodiments. The functions include, for example, smoothly varying values (as illustrated in Figure 31B), multiple distinct regions (as illustrated in Figure 31A) and negative values (as illustrated in Figure 31D). To compute the value of a point on the virtual film, the rays converging on the point from different locations on the virtual lens are weighted by the virtual aperture function and summed. In various other embodiments, the final value is computed by an arbitrary computational function that depends on the ray values. For example, the computational function need not correspond to a weighting by a virtual aperture function, and may include discontinuous program branches that depend on the values of test functions computed from the ray values.
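The sketch below illustrates weighting rays by an arbitrary virtual aperture function; the sub-aperture data layout and the example annular aperture are assumptions for illustration:

```python
import numpy as np

def apply_virtual_aperture(lf, aperture_fn):
    """lf: (n_u, n_v, h, w) sub-aperture stack; aperture_fn maps normalized
    aperture coordinates (u, v) in [-1, 1]^2 to a scalar weight (may be negative)."""
    n_u, n_v, h, w = lf.shape
    out, total = np.zeros((h, w)), 0.0
    for u in range(n_u):
        for v in range(n_v):
            uu, vv = 2.0 * u / (n_u - 1) - 1.0, 2.0 * v / (n_v - 1) - 1.0
            wgt = aperture_fn(uu, vv)
            out += wgt * lf[u, v]
            total += wgt
    return out / total if total != 0 else out

annular = lambda u, v: 1.0 if 0.5 <= np.hypot(u, v) <= 1.0 else 0.0  # example
```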
In other example embodiment,, therefore can select to calculate the method for output pixel independently owing to can realize in combination with other execution mode as herein described.For example, in an example embodiment, comprise that the orientation on empty lens plane and the parameter of vignette circle size change each output pixel continuously.In another example, shown in figure 32, the vignette circle function individual element ground that is used for the light of each output pixel of integration changes.In output image 3200, pixel 3201 is used vignette circle function 3210 and pixel 3251 use vignette circle functions 3250.
In another example embodiment, the virtual aperture function varies from pixel to pixel. In one embodiment, the function is chosen to mask out rays from undesired portions of the scene, such as an undesired subject in the foreground.
In another example embodiment, the virtual aperture parameters are selected interactively by a user, and the ray data is processed in accordance with the selections. Figure 33 is a flow diagram illustrating one such example embodiment. In a first block 3310, the process receives data from the light sensor. At block 3320, the user selects a region of the output image; at block 3330, the user selects an image formation approach; and at block 3340, the user alters the parameters of the chosen approach and visually inspects the computed image of the scene (e.g., on a computer monitor) at block 3350. Block 3360 checks whether the user has completed editing the image portion, returning to block 3330 if not. Block 3370 checks whether the user has completed selecting image portions to edit, returning to block 3320 if not. If editing is complete, block 3380 saves the final edited image.
In another example embodiment, an image with an extended depth of field is computed by bringing more than one subject into focus simultaneously. In one implementation, the depth of field of the output image is extended as in conventional imaging with the main lens aperture stopped down (reduced in size). For each output pixel, the value is evaluated using the rays that would be focused on the output pixel through an aperture (on the virtual lens plane) smaller than the aperture used in sensing the rays.
In one implementation involving the example system 100 shown in Fig. 1, the depth of field is extended by extracting the photosensor value under each microlens image at the same relative position within each microlens image. For Fig. 1, extending the depth of field produces an image in which not only the subject 105 is in focus (due to the correlation between the distances "d" and "s"), but also subjects at other depths, such as the subjects 106 and 107, that might otherwise be blurred due to defocus. This extended depth-of-field approach, optionally combined with down-sampling of the resulting image, is computationally efficient. The approach is selectively implemented in applications in which the noise it may introduce into the generated image can be tolerated, for example where the image is generated for preview purposes (e.g., for display on the preview screen 170 of Fig. 1). Fig. 4, discussed below, further addresses approaches to generating a preview image.
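Under the same assumed sensor layout as the earlier sketches, this shortcut reduces to strided indexing of the raw sensor image:

```python
def extended_dof_preview(raw, n_u, du=None, dv=None):
    """raw: full sensor image; n_u: pixels per microlens per axis.
    Returns the sub-aperture image at offset (du, dv), defaulting to the center."""
    du = n_u // 2 if du is None else du
    dv = n_u // 2 if dv is None else dv
    return raw[du::n_u, dv::n_u]  # one photosensor from under each microlens
```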
Figures 14A and 14B illustrate an approach to computing images with different depths of field, in connection with one or more example embodiments. Figure 14A illustrates an image and a close-up feature computed by refocusing. Note that the face in the close-up is blurred due to the shallow depth of field. The middle row of Figure 14B illustrates a final image computed using the extended depth-of-field approach described in the preceding paragraph.
Fig. 8 is a flow diagram of another computational method for extending the depth of field in an image, according to another example embodiment of the present invention. At block 810, a set of images refocused at all focal depths of a particular scene is computed. At block 820, for each pixel, the image in the set having the highest local contrast at that pixel is determined. At block 830, the pixels with the highest local contrast are assembled into a final virtual image. With this approach, a desired signal-to-noise ratio (SNR) is obtained by using a relatively large number of pixels for each microlens in the microlens array (e.g., relative to selecting a single pixel (photosensor)). Referring to Figure 14C, the example image shown was generated using an approach similar to that described in connection with Fig. 8, and exhibits relatively low image noise.
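A sketch of the Fig. 8 flow is given below; using a smoothed Laplacian magnitude as the local-contrast measure is an assumption, one of several reasonable choices:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def extended_dof_from_stack(stack):
    """stack: refocused images of shape (n_depths, h, w); returns one image."""
    contrast = np.stack([uniform_filter(np.abs(laplace(img)), size=5)
                         for img in stack])
    best = np.argmax(contrast, axis=0)  # per-pixel index of the sharpest depth
    rows, cols = np.mgrid[0:stack.shape[1], 0:stack.shape[2]]
    return stack[best, rows, cols]
```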
In an optional embodiment, the minimal set of refocused images to be computed is defined as follows, in terms of the distance between the principal plane of the main lens and each virtual film plane to which light is optically transmitted to form a refocused image. The minimum distance is set at the focal length of the main lens, and the maximum distance is set at the conjugate depth of the closest subject in the scene. The separation between adjacent virtual film planes is no greater than Δx_m·f/A, where Δx_m is the width of the microlenses, f is the separation between the main lens and the microlens array, and A is the width of the lens aperture.
In another example embodiment, a plurality of refocused images are combined to produce an extended depth-of-field image in which, at each final pixel, the pixel that is best focused in any of the set of refocused images is retained. In another embodiment, the pixels to be retained are selected by enhancing local contrast and coherence with neighboring pixels. For general information regarding imaging, and for specific information regarding image formation approaches involving enhanced local contrast, reference may be made to A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin and M. Cohen, "Interactive Digital Photomontage," ACM Transactions on Graphics, Vol. 23, No. 3, pp. 292-300 (2004), which is fully incorporated herein by reference.
In another example embodiment of the present invention, an extended depth-of-field image is computed as follows. For each output image pixel, a refocusing computation is performed to focus at different depths at that pixel. At each depth, a measure of the homogeneity of the converging rays is computed. The depth producing the (relatively) greatest homogeneity is selected, and the pixel value at that depth is retained for the pixel. With this approach, when an image pixel is in focus, all of its rays originate from the same point in the scene and are therefore likely to share a similar color and intensity.
Although homogeneity measures can be defined in different ways, for many applications the following homogeneity measure is used: for each color component of each ray, the variance of the color intensity is computed relative to the corresponding color component of the central ray (the ray arriving at the pixel from the angle closest to the optical axis of the main lens). All of these variances are summed, and the homogeneity is taken as the reciprocal of this sum.
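A sketch of such a homogeneity measure, under the assumption that the rays converging on one pixel at a candidate depth are available as an array of RGB values; for brevity the variance here is taken about the mean rather than about the central ray, which is a simplification of the measure described above.

```python
import numpy as np

def homogeneity(rays_rgb):
    """rays_rgb: (n_rays, 3) colors of the rays converging on one output
    pixel at a candidate refocus depth. Returns the reciprocal of the
    summed per-channel variances; larger means more homogeneous."""
    total_variance = rays_rgb.var(axis=0).sum()
    return np.inf if total_variance == 0.0 else 1.0 / total_variance
```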
Fig. 34 is a flow chart for extending the depth of field in an image according to another example embodiment of the present invention. At frame 3410, a pixel is selected in the virtual image to be computed. At frame 3420, the pixel is refocused at a plurality of focal depths, and the homogeneity of the rays that combine to focus it at each depth is computed. At frame 3430, the value of the refocused pixel associated with the greatest homogeneity of the combined rays is retained as the final output image pixel value. The process continues at frame 3440 until all pixels have been processed.
In another example embodiment, the above process is adjusted so that the selection of each final pixel value also considers neighboring pixel values and the computed homogeneity of the rays associated with those values.
In yet another example embodiment of the present invention, the depth of field is extended by focusing each pixel at the depth of the nearest object along its viewing direction. Fig. 9 is a flow chart for extending the depth of field in an image according to one such example embodiment. At frame 910, a pixel is selected in the final virtual image to be computed. At frame 920, the depth of the nearest object is estimated along the ray (or set of rays) that passes from the selected pixel through the center of the lens into the scene.
At frame 930, the value of the selected pixel is computed in an image refocused at the estimated depth. If additional pixels remain to be processed at frame 940, another pixel is selected at frame 910 and the process continues at frame 920 for the newly selected pixel. When no additional pixels remain at frame 940, the computed values of the selected pixels are used to assemble the final virtual image.
In some implementations involving depth of field extension, rays originating at depths nearer than the depth of the desired object are disregarded, in order to mitigate or eliminate artifacts, such as those commonly referred to as "blooming" or "halos," around the edges of objects closer to the lens. As an example, Fig. 23 illustrates the set of world rays passing through a desired focal point 2301 on an object of interest 2310. Some of these rays are blocked from the main lens by object 2320; the remaining rays, corresponding to 2350, are detected by the photosensor, while the blocked rays 2340 are disregarded in computing the image value at point 2301. In some implementations, the resulting pixel value is normalized by dividing by the fraction of rays that remain after the disregarded rays are excluded. These implementations can be used individually or in combination with one another, and in connection with any other implementation involving extension of the depth of field.
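Summing only the unblocked rays and renormalizing by the kept fraction reduces, algebraically, to averaging over the unblocked rays; a sketch under that reading, with the mask and array layout as illustrative assumptions:

```python
import numpy as np

def occlusion_aware_value(ray_values, keep_mask):
    """ray_values: (n,) or (n, 3) values of the rays through the desired
    focal point; keep_mask: True for rays not originating nearer than the
    desired depth (e.g., rays 2350 of Fig. 23, with rays 2340 excluded)."""
    if not keep_mask.any():
        return np.zeros_like(ray_values[0], dtype=float)
    # Sum of kept rays divided by the kept fraction of the full ray count
    # equals the mean over the kept rays.
    return ray_values[keep_mask].mean(axis=0)
```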
As noted above, the ray data are processed according to various example embodiments to refocus and/or correct images. Various approaches involving post-capture correction are described below. In some of these implementations, aberrations are corrected by tracing rays through the actual optics (e.g., a lens or lens assembly) of the arrangement used when the light was captured, and mapping each traced ray to the particular photosensor that captured it. Using the known defects exhibited by the optics and the known position of the photosensor detecting each ray, the ray data are rearranged.
In one correction-type implementation, for each pixel on the film of the synthetic photograph, the set of world rays that would contribute to that pixel through idealized optics is computed. In one realization, these rays are computed by tracing rays from the virtual film position back out into the world through perfect optics. Fig. 15 illustrates, in connection with such an example embodiment, a method of tracing rays from a 3D point 1501 on the virtual film plane 1510 through an ideal thin main lens 1520 into a bundle of world rays 1530. In some realizations, the desired ray set 1525 corresponds to the directions through the actual lens, but it may be any set of rays that, when weighted and summed, produces the desired image value.
Fig. 16 illustrates, in connection with another example embodiment, a method of obtaining the values of light propagating along the ideal rays for a particular application. These values are computed by tracing the desired ideal world rays 1630 through the actual main lens 1650, a single-element lens with spherical interfaces that physically directed the actual world rays to the photosensor 1640 when the light was measured (detected). In this example, rays that ideally converge at a single 3D point (1601) do not converge, reflecting a defect, known as spherical aberration, of lenses with spherical interfaces. The photosensor 1640 provides the value of each aberrated ray (such as 1651) used to correct the spherical aberration.
Figs. 17A-17C illustrate example results of a computer simulation using the through-the-lens correction method. The image in Fig. 17A is an ideal 512x512 photograph (as seen through perfect optics). The image in Fig. 17B was produced with an actual f/2 biconvex spherical lens and exhibits loss of contrast and blurring. The image in Fig. 17C is the photograph computed with the image correction method described above, using optics and a sensor arrangement providing 10x10 directional (u, v) resolution at each of 512x512 microlenses.
In another example embodiment of the present invention, chromatic aberration of the main lens used to capture the image is corrected. Chromatic aberration arises, as light is physically directed through the optics, from wavelength-dependent dispersion: differences in the physical direction taken by light of different optical wavelengths. Incident rays are traced through the actual optics, taking into account the wavelength-dependent refractive indices present in the actual optics. In some applications, each color component of the system is traced separately according to its dominant wavelength.
In another example embodiment, shown in Fig. 18A, the red, green and blue components of a color imaging system are each traced separately. Green world rays are computationally traced back through the imaging system to produce green rays 1830, determining where, and in what direction, they intersect the color photosensor 1810. Similarly, Fig. 18B illustrates computationally tracing the desired blue world rays 1820, which are refracted over a larger extent than the green rays. Fig. 18C illustrates computationally tracing the desired red world rays 1830, which are refracted over a smaller extent than the green rays. The value of each ray is computed from the values of the photosensor 1810, for example using methods described in connection with other example embodiments herein. The light field value of each ray is integrated to compute the corrected image value for each particular film pixel. For some applications, chromatic aberration is further mitigated by focusing each color channel on the plane of best focus for its wavelength.
The desired rays do not, in general, converge exactly on one of the discrete ray values sampled by the photosensor. In some implementations, the values for these rays are computed as a function of the discrete ray values. In some implementations, this function corresponds to a weighted sum of the discrete ray values in the desired light field. In some implementations, the weighted sum corresponds to a 4D convolution of the discrete sample values with a predetermined convolution kernel function. In other implementations, the weighting corresponds to quadrilinear interpolation over the 16 nearest neighbors. In still other implementations, the weighting corresponds to cubic or bicubic interpolation from the 16 nearest neighbors.
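A sketch of the quadrilinear option, interpolating a fractional 4D coordinate from its 16 nearest stored samples; the indexing conventions are illustrative assumptions.

```python
import numpy as np

def quadrilinear_sample(L, s, t, u, v):
    """Sample a discrete 4D light field L[s, t, u, v] at a fractional
    coordinate by quadrilinear interpolation over the 16 nearest samples."""
    coords = np.array([s, t, u, v])
    lo = np.floor(coords).astype(int)
    frac = coords - lo
    value = 0.0
    for corner in range(16):                  # the 2**4 corners of the 4D cell
        offs = [(corner >> k) & 1 for k in range(4)]
        w = 1.0
        for f, o in zip(frac, offs):
            w *= f if o else (1.0 - f)        # quadrilinear weight
        idx = tuple(np.clip(lo + offs, 0, np.array(L.shape) - 1))
        value += w * L[idx]
    return value
```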
It should be noted that the example correction process has been described in terms of ray tracing for conceptual simplicity; the correction can also be realized with various other methods. In one embodiment, for each desired output pixel, the set of contributing photosensor values and their relative weights are computed in advance. As described above, these weights are a property of many factors, including the optics, the sensor, the desired set of rays to be weighted and summed for each output pixel, and the desired light field reconstruction filter. The weights are optionally precomputed with ray tracing and stored. The corrected image is formed by weighting and accumulating the appropriate sensed light field values for each output pixel.
Figs. 29 and 30 illustrate other example embodiments used in connection with the above correction method. Fig. 29 is a flow chart of a process for precomputing the database of weights associating the photosensors with each output pixel value. In the first two frames 2910 and 2920, the process receives, for the desired image formation, a data set (e.g., in a database) specifying, for each output image pixel, the set of ideal world rays required to produce that pixel's value, and receives a specification of the actual main lens optics that physically directs light to the photosensors. At frame 2925, an image pixel is selected. For the output value of this pixel, the set of world rays is computationally traced through a virtual representation of the main lens optics to the photosensors at frame 2930. This yields the set of weights to be applied to the photosensor values in computing the output pixel value. These values are stored in the output database at frame 2940. Frame 2950 checks whether all pixels have been processed; if not, the process returns to frame 2925. If all pixels have been processed, the final frame 2960 saves the completed database.
Fig. 30 is a flow chart of a process for computing an output image using a weight database computed as in the process of Fig. 29. In frames 3010 and 3020, the process receives the database and the set of photosensor values captured with the main lens optics that was used in computing the database. At frame 3025, a pixel in the output image is selected so that its final image value can be computed. For the selected pixel, frame 3030 uses the database to look up the set of contributing photosensors and their weights. At frame 3040, the sensor values provided in 3020 are weighted and summed to produce the image pixel value. At frame 3050, a check determines whether all image pixels have been processed. If not, the process returns to frame 3025; if so, the output image is saved at frame 3060.
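A minimal sketch of the Fig. 30 stage, assuming the database produced by the Fig. 29 stage is represented, for each output pixel, as a list of (sensor index, weight) pairs; this representation and the names are assumptions for illustration.

```python
import numpy as np

def form_corrected_image(sensor_values, weight_db):
    """sensor_values: flat array of photosensor readings captured through
    the actual lens (frame 3020). weight_db: for each output pixel, a list
    of (sensor_index, weight) pairs precomputed as in Fig. 29 (frame 3010).
    Returns the corrected image as a flat array ordered like weight_db."""
    out = np.empty(len(weight_db))
    for i, contributions in enumerate(weight_db):
        # Weighted accumulation of the contributing sensors (frames 3030-3040).
        out[i] = sum(w * sensor_values[j] for j, w in contributions)
    return out
```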
In various example embodiments, the ray data are processed in the frequency domain, using refocusing computations that operate in the Fourier domain. Fig. 20 is a flow chart of one such method in connection with another example embodiment. The input to the algorithm is a discrete 4D light field 2010, denoted L(s, t, u, v), representing rays that start at position (u, v) on the main lens and terminate at position (s, t) on the microlens plane (e.g., rays originating at the main lens 110 of Fig. 1 and terminating at the plane of the microlens array 120). The first step is to compute the discrete 4D Fourier transform 2020 of the light field. The value of the 4D Fourier transform, denoted M(k_s, k_t, k_u, k_v), at (k_s, k_t, k_u, k_v) is defined by the following equation:

M(k_s, k_t, k_u, k_v) = ∫∫∫∫ L(s, t, u, v) · exp(−2πi(s·k_s + t·k_t + u·k_u + v·k_v)) ds dt du dv

where exp is the exponential function, exp(x) = e^x. In some implementations, the discrete light field is sampled on a rectilinear grid in 4D space, and the Fourier transform is computed with the fast Fourier transform (FFT) algorithm.
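For a light field sampled on a rectilinear grid, the transform step reduces to a single FFT call; a sketch, with the array and its size as placeholders:

```python
import numpy as np

# Placeholder light field sampled on a rectilinear 4D grid, indexed L[s, t, u, v].
L = np.random.rand(16, 16, 16, 16)
M = np.fft.fftn(L)   # discrete 4D Fourier transform 2020, computed via FFT
```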
Next, once for each depth at which a refocused image is desired, the appropriate 2D slice 2030 of the 4D Fourier transform is extracted, and the 2D inverse Fourier transform of the extracted slice is computed; the results are the photographs 2040 focused at different depths. For a function G(k_x, k_y), the 2D inverse Fourier transform g(x, y) is defined by the following equation:

g(x, y) = ∫∫ G(k_x, k_y) · exp(2πi(x·k_x + y·k_y)) dk_x dk_y

The values extracted on the 2D slice are determined by the desired focal depth. Considering the conjugate plane (on the image side of the lens) of the desired world focal plane, where D is the separation between this conjugate plane and the main lens and F is the separation between the microlens plane and the main lens, the values of the extracted 2D slice at coordinates (k_x, k_y) are given by

G(k_x, k_y) = 1/F² · M(k_x(1 − D/F), k_y(1 − D/F), k_x·D/F, k_y·D/F)    (3)
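A sketch of the slicing stage of equation (3), using nearest-neighbor resampling in place of a proper 4D reconstruction filter (the filtering issues are addressed in the paragraphs that follow); the centering conventions are illustrative assumptions.

```python
import numpy as np

def fourier_slice_refocus(M, D, F, out_shape):
    """M: centered 4D Fourier transform of the light field, e.g.
    np.fft.fftshift(np.fft.fftn(L)). Returns the photograph refocused at
    the depth whose conjugate-plane separation from the main lens is D."""
    ny, nx = out_shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    a, b = 1.0 - D / F, D / F

    def nearest(idx, n):
        # Round to the nearest stored sample and shift to centered indexing.
        return np.clip(np.round(idx).astype(int) + n // 2, 0, n - 1)

    # Equation (3): sample M at (kx*a, ky*a, kx*b, ky*b), scaled by 1/F^2.
    G = M[nearest(kx * a, M.shape[0]), nearest(ky * a, M.shape[1]),
          nearest(kx * b, M.shape[2]), nearest(ky * b, M.shape[3])] / F ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))
```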
Artifacts caused by discretization, resampling and the Fourier transform are selectively mitigated using various methods. In general signal processing terms, when a signal is sampled, it is replicated periodically in the dual domain. When the sampled signal is reconstructed by convolution, it is multiplied in the dual domain by the Fourier transform of the convolution filter. Ideally, this multiplication leaves the central replica untouched and eliminates all other replicas. The ideal filter is the 4D sinc function, sinc(s)·sinc(t)·sinc(u)·sinc(v), where sinc(x) = sin(πx)/(πx); however, this function has infinite extent.
In various methods, filters of finite extent are used for the frequency-domain processing; such filters exhibit defects that can be selectively mitigated. Fig. 21A illustrates these defects for a specific 1D filter, in connection with the discussion below of their mitigation. Fig. 21A represents the triangle filter that realizes 1D linear interpolation (or, as a separable basis, 4D quadrilinear interpolation). Fig. 21B illustrates the Fourier transform of the triangle filter: it is non-unity within the band limit (see 2010), gradually decaying to smaller fractional values as frequency increases. In addition, the filter is not truly band-limited, containing energy at frequencies outside the desired stop-band (2020).
The first defect above leads to "rolloff artifacts," a darkening of the borders of the computed photographs. The decay of the filter spectrum with increasing frequency means that the spatial light field values modulated by this spectrum likewise "roll off" to fractional values toward the borders.
The second defect above relates to aliasing artifacts in the computed photographs, caused by energy at frequencies above the band limit. The non-zero energy extending beyond the band limit means that the periodic replicas are not fully eliminated, leading to two kinds of aliasing. First, the replicas that appear parallel to the slicing plane show up as 2D replicas encroaching on the borders of the final photograph. Second, the replicas positioned perpendicular to this plane are projected and summed onto the image plane, creating ghosting and loss of contrast.
In one example embodiment, the rolloff defect described above is corrected by multiplying the input light field by the reciprocal of the filter's Fourier spectrum, to offset the effect introduced in the resampling process. In the illustrated embodiment, the multiplication is performed in a preprocessing step of the algorithm, before the 4D Fourier transform. Although it corrects the rolloff, the pre-multiplication can amplify the energy of the light field near its borders, maximizing the energy that folds back over the desired field of view as aliasing.
Three methods of suppressing aliasing artifacts (oversampling, superior filtering and zero-padding) are used separately or in combination in the various example embodiments below. Oversampling when extracting the 2D slice increases the replication period in the spatial domain. This means that less of the energy in the tails of the replicas falls within the borders of the final photograph. Increasing the sampling rate in one domain increases the field of view in the other domain; the aliased energy from neighboring replicas falls into these outer regions, which are cropped away to isolate the original central image of interest.
Another method of mitigating aliasing is to use a finite-extent filter that approximates the perfect spectrum of the ideal filter as closely as possible. In one example embodiment, the separable 4D Kaiser-Bessel function kb4(s, t, u, v) = kb(s)·kb(t)·kb(u)·kb(v) is used as the filter, where

kb(x) = (1/W) · I_0(P · sqrt(1 − (2x/W)²)) for |x| ≤ W/2, and 0 otherwise.

In this equation, I_0 is the standard zeroth-order modified Bessel function of the first kind, W is the desired filter width, and P is a parameter that depends on W. In the illustrated embodiment, values of W of 5, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0 and 1.5 correspond to values of P of 7.4302, 6.6291, 5.7567, 4.9107, 4.2054, 3.3800, 2.3934 and 1.9980, respectively. For general information about aliasing, and for specific information about methods of mitigating aliasing in connection with one or more example embodiments of the present invention, reference may be made to J. I. Jackson, C. H. Meyer, D. G. Nishimura and A. Macovski, "Selection of a convolution function for Fourier inversion using gridding," IEEE Transactions on Medical Imaging, Vol. 10, No. 3, pp. 473-478 (1991), which is fully incorporated herein by reference. In one realization, a width W of less than about 2.5 achieves the desired image quality.
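A sketch of the separable Kaiser-Bessel factor under the standard gridding form; the exact functional form is an assumption consistent with the I_0/W/P description above, with W = 2.5, P = 3.3800 taken from the listed pairs.

```python
import numpy as np
from scipy.special import i0   # zeroth-order modified Bessel function, first kind

def kb(x, W=2.5, P=3.3800):
    """1D factor of the separable 4D filter kb4(s,t,u,v) = kb(s)kb(t)kb(u)kb(v)."""
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) <= W / 2.0
    arg = np.where(inside, 1.0 - (2.0 * x / W) ** 2, 0.0)
    return np.where(inside, i0(P * np.sqrt(arg)) / W, 0.0)
```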
In another example embodiment, aliasing is mitigated by padding the light field with a small border of zero values before the pre-multiplication and the Fourier transform are performed. This pushes energy slightly away from the borders and minimizes the amplification of aliased energy caused by the rolloff-correcting pre-multiplication.
Fig. 22 is a flow chart illustrating a method of refocusing in the frequency domain using the various corrections described above, according to another example embodiment of the present invention. At frame 2210, a discrete 4D light field is received. A preprocessing stage is performed once per input light field: frame 2215 checks whether aliasing reduction is desired and, if so, frame 2220 pads the light field with a small zero-valued border (e.g., 5% of the width in each dimension). At frame 2225, a check determines whether rolloff correction is desired and, if so, the light field is modulated at frame 2230 by the reciprocal of the Fourier transform of the resampling filter. In the final frame of the preprocessing stage, the 4D Fourier transform of the light field is computed at frame 2240.
A refocusing stage is performed once per desired focal depth: the process receives the desired focal depth of the refocused image at frame 2250, for example as directed by a user. At frame 2260, a check determines whether aliasing reduction is desired. If not, frame 2270 extracts the 2D slice of the light field's Fourier transform using the desired 4D resampling filter, the trajectory of the 2D slice corresponding to the desired focal depth; frame 2275 computes the 2D inverse Fourier transform of the extracted slice, and the process proceeds to frame 2290. If aliasing reduction is desired at frame 2260, the process proceeds to frame 2280, where the 2D slice is extracted using the desired 4D resampling filter with oversampling (e.g., 2x oversampling in each of the two dimensions). At frame 2283, the 2D inverse Fourier transform of the slice is computed, and at frame 2286 the resulting image is cropped to the original size without oversampling; the process then proceeds to frame 2290. At frame 2290, a check determines whether refocusing is complete. If not, another focal depth is selected at frame 2250 and the process proceeds as described above. If refocusing is complete, the process exits at frame 2295.
The asymptotic computational complexity of this frequency-domain algorithm is lower than that of refocusing by explicitly summing rays, as described in the alternative implementations above. Suppose the input discrete light field has N samples in each of its four dimensions. Then, for refocusing at each new depth, the computational complexity of the explicit ray-summation algorithm is O(N^4). For refocusing at each new depth, the computational complexity of the frequency-domain algorithm is O(N^2 log N), dominated by the cost of the 2D inverse Fourier transform. The preprocessing step, however, costs O(N^4 log N) for each new light field data set.
In another example embodiment, the captured rays are optically filtered. Though not limited to these applications, some examples of such filters are neutral density filters, color filters and polarizing filters. Any existing filter, or one yet to be developed, can be used to filter the rays as desired. In one realization, rays are filtered in groups or individually, with each group or individual ray filtered differently. In another realization, the filtering is achieved with a spatially varying filter attached to the main lens. In one example application, a gradient filter, such as a neutral density gradient filter, is used to filter the rays. In another realization, a spatially varying filter is used in front of one or more of the photosensors, the microlens array or the photosensor array. Referring to Fig. 1, as an example, one or more such filters are optionally placed in front of one or more of the main lens 110, the microlens array 120 and the photosensor array 130.
In another example embodiment of the present invention, a computational component such as a processor is programmed to selectively choose the rays combined in computing each output pixel, so as to achieve the net filtering desired for that pixel value. As an example, consider a neutral density gradient filter at the main lens; each image of the lens aperture appearing under a microlens is weighted according to the filter gradient across its extent. In one realization, the output image is computed by choosing, under each microlens, the photosensor at the point on the gradient matching the desired level of neutral density filtering for the output image pixel. For example, to produce an image in which every pixel is filtered to the greatest extent, each pixel value is set to the sensor value lying, under the corresponding microlens, at the extreme of the gradient corresponding to maximum filtering. A sketch of this computed filtering appears below.
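The sketch assumes the light field is indexed by microlens position (j, i) and aperture position (v, u) under each microlens, and that the gradient filter's attenuation map across the aperture is known; these indexing conventions and names are illustrative assumptions.

```python
import numpy as np

def filtered_image(lf, gradient, level):
    """lf: light field indexed as lf[j, i, v, u]; gradient: (V, U) map of
    the neutral-density attenuation across the aperture; level: desired net
    attenuation. Under each microlens, pick the sensor whose position under
    the gradient filter best matches the desired level."""
    v_idx, u_idx = np.unravel_index(
        np.argmin(np.abs(gradient - level)), gradient.shape)
    return lf[:, :, v_idx, u_idx]   # one chosen sensor value per microlens
```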
Fig. 2 is a data flow diagram illustrating a method of processing images in connection with other example embodiments of the present invention. An image sensor arrangement 210 captures image data, for example using a microlens/photosensor chip arrangement 212 similar to the microlens array 120 and photosensor array 130 shown in Fig. 1 and described above. The image sensor arrangement 210 includes integrated processing circuitry 214 that performs some processing of the acquired image data in preparation for its transfer.
The sensor data generated at the image sensor arrangement 210 are transferred to a signal processor 220. The signal processor includes one or more of a low-resolution image processor 222, a compression processor 224 and a ray-direction processor 226; depending on the application, each of these processors is implemented separately or functionally with a common processor. In addition, each processor shown in Fig. 2 is selectively programmed with one or more of the processing functions described in connection with the other figures or elsewhere herein. The signal processor 220 is optionally implemented on a common device or component with the image sensor arrangement 210, for example in common circuitry and/or on a common imaging device.
The low-resolution image processor 222 uses the sensor data received from the image sensor arrangement 210 to generate low-resolution image data, which are sent to a viewfinder display 230. An input device 235, such as a button on a camera or video camera, sends image capture requests to the signal processor 220, for example requesting the capture of a particular image being shown in the viewfinder display 230 and/or, where so implemented, initiating video imaging.
In response to an image capture request or other direction, the signal processor 220 uses the sensor data captured by the image sensor arrangement 210 to generate processed sensor data. In some applications, the compression processor 224 is implemented to generate compressed raw data that are transferred to a data storage arrangement 240 (e.g., memory). The raw data are then optionally processed at the signal processor 220 and/or at an external computer 260 or other processing device, for example to perform ray-direction processing such as that implemented with the ray-direction processor 226 described below.
In some applications, the ray-direction processor 226 is implemented to process sensor data received at the signal processor 220, rearranging the sensor data to generate refocused and/or corrected image data. The ray-direction processor 226 uses one or both of the sensor data received from the image sensor arrangement 210 and the raw data sent to the data storage arrangement 240. In these applications, the ray-direction processor 226 uses ray-mapping characteristics of the particular imaging arrangement (e.g., camera, video camera or mobile telephone) in which the image sensor arrangement 210 is implemented, to determine the rearrangement of the rays sensed with the microlens/photosensor chip 212. Image data generated with the ray-direction processor 226 are sent to the data storage arrangement 240 and/or used in various applications, such as streaming the image data to a remote location or sending the image data over a communications link 250 to a remote location.
In some applications, the integrated processing circuitry 214 includes some or all of the processing functions of the signal processor 220, implemented for example with a CMOS-type processor or another processor with appropriate functionality. For example, the low-resolution image processor 222 is optionally included with the integrated processing circuitry 214, with low-resolution image data sent directly from the image sensor arrangement 210 to the viewfinder display 230. Similarly, the compression processor 224, or similar functionality, is optionally implemented with the integrated processing circuitry 214.
In some applications, computation of the final images can be carried out on the integrated processing circuitry 214 (for example in some digital still cameras that output only final images). In other applications, the image sensor arrangement 210 may simply send the raw light ray data, or a compressed version of these data, to an external computing device such as a desktop computer; the final images are then computed from these data on the external device.
Fig. 3 is a flow chart of a method for processing image data according to another example embodiment of the present invention. At frame 310, image data are captured at a camera or other imaging device using a main lens, or a lens assembly, together with a microlens/photosensor array such as that shown in Fig. 1. If a preview image is desired at frame 320, a preview image is generated at frame 330 using, for example, a viewfinder or another type of display. The preview image is shown, using a subset of the captured image data, on the viewfinder of a camera or video camera, for example.
At frame 340, the raw data from the photosensor array are processed and compressed for use. At frame 350, ray data are extracted from the processed and compressed data. This extraction involves, for example, detecting the beam or set of light rays incident on each particular photosensor in the photosensor array. At frame 360, ray-mapping data are retrieved for the imaging device with which the image data were captured. At frame 370, the ray-mapping data are used together with the extracted ray data to synthesize a rearranged image. For example, the extraction, mapping and synthesis frames 350-370 are selectively implemented by determining the set of rays belonging to a particular pixel of the scene and integrating the light ray values to synthesize the value of that pixel. In some applications, the ray-mapping data are used to trace the rays of each particular pixel through the actual lens used to acquire the image data. For example, by determining the appropriate set of rays to accumulate so as to focus upon a selected subject at a particular focal depth, the rays can be rearranged to arrive at a refocused image. Similarly, by determining the ray rearrangement appropriate to correct conditions such as lens aberration in the imaging device, the rays can be rearranged to produce an image relatively free of characteristics of the aberration or other condition.
Various methods are selectively used to generate preview images for camera-type and other applications. Fig. 4 is a flow chart for generating such a preview image according to another example embodiment of the present invention. The method shown in Fig. 4 and described below can be implemented in connection with, for example, the generation of preview images at frame 330 of Fig. 3.
At frame 410, an instruction to preview raw sensor data is received. At frame 420, a center pixel is selected from each microlens image in the raw sensor image data. The selected center pixels are collected at frame 430 to form a high depth of field image. At frame 440, the high depth of field image is downsampled to fit the viewfinder display resolution. Referring to Fig. 2, as an example, this downsampling is optionally carried out at one or more of the image sensor arrangement 210 or the signal processor 220. The resulting preview image data are sent to the viewfinder display at frame 450, and at frame 460 the viewfinder displays an image with the preview image data.
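A sketch of frames 420-440, assuming the raw sensor image is tiled into regions of n_v x n_u pixels, one tile per microlens; the tile geometry and names are illustrative assumptions.

```python
import numpy as np

def preview(raw, n_u, n_v, step=2):
    """raw: 2D sensor image arranged as microlens tiles of n_v x n_u pixels.
    Selecting the center pixel of each tile yields a high depth-of-field
    image (frames 420-430), then crude downsampling fits it to the
    viewfinder resolution (frame 440)."""
    centers = raw[n_v // 2::n_v, n_u // 2::n_u]   # one pixel per microlens
    return centers[::step, ::step]
```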
Fig. 5 is a flow chart for processing and compressing image data according to another example embodiment of the present invention. The method shown in Fig. 5 and described below can be implemented in connection with the processing and compression of image data at frame 340 of Fig. 3. When implemented with an arrangement such as that shown in Fig. 2, the method of Fig. 5 can be implemented at, for example, the image sensor arrangement 210 and/or the signal processor 220.
At frame 510, raw image data are received from the sensor array. If coloring is desired at frame 520, the color filter array values are demosaicked at frame 530 to produce color at the sensors. If adjustment and alignment are desired at frame 540, the microlenses are adjusted and aligned with the photosensor array at frame 550. If interpolation is desired at frame 560, pixel values are interpolated at frame 570 to an integer number of pixels associated with each microlens. At frame 580, the processed raw image data are compressed and presented for synthesis processing (e.g., to form refocused and/or corrected images).
Fig. 6 is a flow chart for image synthesis according to another example embodiment of the present invention. The method shown in Fig. 6 and described below can be implemented in connection with the image synthesis illustrated at frame 370 of Fig. 3 and further described below.
At frame 610, raw image data are received from the photosensor array. If refocusing is desired at frame 620, the image data are refocused at frame 630, for example using methods described herein to selectively rearrange the rays represented by the raw image data. If image correction is desired at frame 640, the image data are corrected at frame 650. In various applications where both refocusing and image correction are desired, the refocusing of frame 630 is performed before, or concurrently with, the image correction of frame 650. At frame 660, a resulting image is generated using the processed image data, including refocused and corrected data where available.
Fig. 7A is a flow chart for image refocusing using a lens arrangement according to another example embodiment of the present invention. The method shown in Fig. 7A and described below can be implemented, for example, in connection with the refocusing of image data at frame 630 of Fig. 6.
At frame 710, a virtual focal plane is selected for refocusing a portion of the image. At frame 720, a virtual image pixel on the virtual focal plane is selected. If correction is desired at frame 730 (e.g., for lens aberration), the value of the ray (or set of rays) passing between the selected pixel and each particular lens position is computed at frame 740. In one application, this computation is facilitated by computing the conjugate ray that falls on the selected pixel and tracing that ray through the lens arrangement.
At frame 750, the ray (or ray set) values are accumulated over the lens positions for the particular focal plane and summed to determine the total value of the selected pixel. In some applications, the accumulation of frame 750 is a weighted sum, in which some rays (or ray sets) are given greater weight than others. If additional pixels remain to be refocused at frame 760, another pixel is selected at frame 720 and the process continues until no pixels remain to be refocused. After the pixels have been refocused, the pixel data are assembled at frame 770 to produce the refocused virtual image at the virtual focal plane selected at frame 710. Some or all of the refocusing method involving frames 720, 730, 740 and 750 of Fig. 7 is implemented with more specific functions for various applications.
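A sketch of frames 720-750 in shift-and-add form, one common way to realize the summation over lens positions; the light field indexing and the parameter alpha (the ratio of virtual to captured focal-plane separations) are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def refocus_pixel(lf, j, i, alpha):
    """Value of one virtual-film pixel: sum, over all aperture (lens)
    positions, of the rays converging on pixel (j, i) of the virtual focal
    plane. lf[j, i, v, u] indexes the light field by microlens (j, i) and
    aperture position (v, u)."""
    n_v, n_u = lf.shape[2], lf.shape[3]
    total, weight = 0.0, 0.0
    for v in range(n_v):
        for u in range(n_u):
            # Shift the spatial sample in proportion to the aperture offset.
            jj = int(round(j + (v - n_v / 2) * (1 - 1 / alpha)))
            ii = int(round(i + (u - n_u / 2) * (1 - 1 / alpha)))
            if 0 <= jj < lf.shape[0] and 0 <= ii < lf.shape[1]:
                total += lf[jj, ii, v, u]
                weight += 1.0
    return total / weight if weight else 0.0
```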
Sensor data processing circuitry implemented with one or more example embodiments described herein may, depending on the implementation, include one or more microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs) and/or programmable gate arrays (e.g., field-programmable gate arrays (FPGAs)). In this regard, the sensor data processing circuitry may be any type or form of circuitry now known or later developed. For example, the sensor data processing circuitry may include a single component, or multiple components (microprocessors, ASICs and DSPs), whether active and/or passive, that are coupled, arranged and/or programmed to implement the desired operations, functions and applications.
In various applications, the sensor data processing circuitry implements or executes one or more applications, routines, programs and/or data structures that carry out the particular methods, tasks or operations described and/or illustrated herein. The functionality of the applications, routines or programs is selectively combined or distributed in some applications. In some applications, the applications, routines or programs are implemented by the sensor (or other) data processing circuitry using one or more of a variety of programming languages, now known or later developed. Such programming languages include, for example, FORTRAN, C, C++, Java and BASIC, whether compiled or uncompiled, selectively used in connection with one or more aspects of the present invention.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Based on the above description and illustrations, those skilled in the art will readily recognize that various changes and modifications may be made to the present invention without strictly following the exemplary embodiments and applications illustrated and described herein. For example, such changes may include implementing the various optical imaging applications and arrangements in different types of devices, increasing or reducing the number of rays collected per pixel (or per other selected image area), or implementing algorithms and/or equations other than the examples described for collecting or processing image data. Other changes may include using coordinate representations other than, or in addition to, Cartesian coordinates, such as polar coordinates. Such changes and modifications do not depart from the true spirit and scope of the present invention.
Claims (12)
1. A digital imaging system for synthesizing an image from a captured set of light rays, the system comprising:
a main lens that directs light toward a physical focal plane;
a photosensor array that captures the set of light rays;
a microlens array, in a plane between the main lens and the photosensor array, through which the set of light rays is physically directed from the main lens to the photosensor array; and
a data processor that uses virtual redirection of the set of light rays captured by the photosensor array to compute a synthetic refocused image, the refocused image being focused on a virtual focal plane that differs from the physical focal plane.
2. The system of claim 1, wherein the data processor computes the image by selectively combining the captured rays of the set of light rays.
3. The system of claim 1, wherein the data processor computes the image by selectively accumulating rays of the set of light rays.
4. The system of claim 1, wherein the data processor computes the image by selectively weighting and accumulating rays of the set of light rays.
5. The system of claim 1, wherein the data processor computes the image using (i) the spatial distribution of the set of light rays incident on the photosensor array and (ii) the physical direction in which the set of light rays travels from the main lens through the microlens array to the photosensor array.
6. The digital imaging system of claim 1, wherein the data processor virtually redirects the set of light rays to virtually refocus a portion of the image on a virtual focal plane that differs both from the plane in which the microlens array is placed and from the plane in which the photosensor array is placed.
7. The digital imaging system of claim 1, wherein the data processor uses the virtual redirection of the set of light rays captured by the photosensor array to compute the image with lens aberration corrected.
8. The digital imaging system of claim 1, wherein the refocused image exhibits an extended depth of field.
9. The digital imaging system of claim 1, wherein, for each microlens in the microlens array, the photosensor array includes a plurality of photosensors.
10. The digital imaging system of claim 9, wherein the main lens focuses a two-dimensional image of a scene onto the microlens array, and wherein each microlens in the microlens array is arranged to separate the light focused upon it by the main lens and to direct the separated light to the plurality of photosensors for that microlens.
11. The digital imaging system of claim 1, wherein the data processor uses data from the photosensor array to create refocused subimages by resolving, from the captured rays, a different depth of focus for each of a plurality of subimages showing different portions of the scene.
12. The digital imaging system of claim 11, wherein the data processor synthesizes a final image by combining the refocused subimages.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US61517904P | 2004-10-01 | 2004-10-01 | |
US60/615,179 | 2004-10-01 | ||
US64749205P | 2005-01-27 | 2005-01-27 | |
US60/647,492 | 2005-01-27 |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB200580039822XA Division CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101426085A CN101426085A (en) | 2009-05-06 |
CN101426085B true CN101426085B (en) | 2012-10-03 |
Family
ID=38965761
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008101691410A Expired - Fee Related CN101426085B (en) | 2004-10-01 | 2005-09-30 | Imaging arrangements and methods therefor |
CNB200580039822XA Expired - Fee Related CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB200580039822XA Expired - Fee Related CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN101426085B (en) |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4941332B2 (en) * | 2008-01-28 | 2012-05-30 | ソニー株式会社 | Imaging device |
JP5472584B2 (en) * | 2008-11-21 | 2014-04-16 | ソニー株式会社 | Imaging device |
JP4706882B2 (en) * | 2009-02-05 | 2011-06-22 | ソニー株式会社 | Imaging device |
JP5463718B2 (en) * | 2009-04-16 | 2014-04-09 | ソニー株式会社 | Imaging device |
EP2244484B1 (en) | 2009-04-22 | 2012-03-28 | Raytrix GmbH | Digital imaging method for synthesizing an image using data recorded with a plenoptic camera |
JP5515396B2 (en) * | 2009-05-08 | 2014-06-11 | ソニー株式会社 | Imaging device |
ES2607052T3 (en) * | 2009-06-17 | 2017-03-29 | 3Shape A/S | Focus scanning apparatus |
DE102009027372A1 (en) | 2009-07-01 | 2011-01-05 | Robert Bosch Gmbh | Camera for a vehicle |
JP6149339B2 (en) * | 2010-06-16 | 2017-06-21 | 株式会社ニコン | Display device |
JP2012205111A (en) * | 2011-03-25 | 2012-10-22 | Casio Comput Co Ltd | Imaging apparatus |
TW201322048A (en) * | 2011-11-25 | 2013-06-01 | Cheng-Xuan Wang | Field depth change detection system, receiving device, field depth change detecting and linking system |
JP5913934B2 (en) | 2011-11-30 | 2016-05-11 | キヤノン株式会社 | Image processing apparatus, image processing method and program, and imaging apparatus having image processing apparatus |
JP5871625B2 (en) * | 2012-01-13 | 2016-03-01 | キヤノン株式会社 | IMAGING DEVICE, ITS CONTROL METHOD, AND IMAGING SYSTEM |
CN103297677B (en) * | 2012-02-24 | 2016-07-06 | 卡西欧计算机株式会社 | Generate video generation device and the image generating method of reconstruct image |
JP5459337B2 (en) | 2012-03-21 | 2014-04-02 | カシオ計算機株式会社 | Imaging apparatus, image processing method, and program |
JP2013198016A (en) * | 2012-03-21 | 2013-09-30 | Casio Comput Co Ltd | Imaging apparatus |
JP5914192B2 (en) * | 2012-06-11 | 2016-05-11 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US9398264B2 (en) | 2012-10-19 | 2016-07-19 | Qualcomm Incorporated | Multi-camera system using folded optics |
KR20140094395A (en) * | 2013-01-22 | 2014-07-30 | 삼성전자주식회사 | photographing device for taking a picture by a plurality of microlenses and method thereof |
KR20150006755A (en) * | 2013-07-09 | 2015-01-19 | 삼성전자주식회사 | Image generating apparatus, image generating method and non-transitory recordable medium |
CN103417181B (en) * | 2013-08-01 | 2015-12-09 | 北京航空航天大学 | A kind of endoscopic method for light field video camera |
US10178373B2 (en) | 2013-08-16 | 2019-01-08 | Qualcomm Incorporated | Stereo yaw correction using autofocus feedback |
JP6238657B2 (en) * | 2013-09-12 | 2017-11-29 | キヤノン株式会社 | Image processing apparatus and control method thereof |
US10010387B2 (en) | 2014-02-07 | 2018-07-03 | 3Shape A/S | Detecting tooth shade |
JP2015185998A (en) | 2014-03-24 | 2015-10-22 | 株式会社東芝 | Image processing device and imaging apparatus |
US9613417B2 (en) * | 2015-03-04 | 2017-04-04 | Ricoh Company, Ltd. | Calibration of plenoptic imaging systems using fourier transform |
CN106303208B (en) * | 2015-08-31 | 2019-05-21 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
CN106303210B (en) * | 2015-08-31 | 2019-07-12 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
CN106303209B (en) * | 2015-08-31 | 2019-06-21 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
CN108369338B (en) * | 2015-12-09 | 2021-01-12 | 快图有限公司 | Image acquisition system |
EP3182697A1 (en) * | 2015-12-15 | 2017-06-21 | Thomson Licensing | A method and apparatus for correcting vignetting effect caused on an image captured by lightfield cameras |
GB201602836D0 (en) * | 2016-02-18 | 2016-04-06 | Colordyne Ltd | Lighting device with directable beam |
EP3270589A1 (en) * | 2016-07-11 | 2018-01-17 | Thomson Licensing | An apparatus and a method for generating data representative of a pixel beam |
WO2018103819A1 (en) * | 2016-12-05 | 2018-06-14 | Photonic Sensors & Algorithms, S.L. | Microlens array |
CN109708193A (en) * | 2018-06-28 | 2019-05-03 | 永康市胜时电机有限公司 | Heating device inlet valve aperture control platform |
CN108868213B (en) * | 2018-08-20 | 2020-05-15 | 浙江大丰文体设施维保有限公司 | Stage disc immediate maintenance analysis mechanism |
EP3942345A1 (en) * | 2019-03-22 | 2022-01-26 | Università degli Studi di Bari "Aldo Moro" | Process and apparatus for the capture of plenoptic images between arbitrary planes |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5282045A (en) * | 1990-04-27 | 1994-01-25 | Hitachi, Ltd. | Depth-of-field control apparatus and image pickup apparatus having the same therein |
US5610390A (en) * | 1994-10-03 | 1997-03-11 | Fuji Photo Optical Co., Ltd. | Solid-state image pickup device having microlenses each with displaced optical axis |
US5757423A (en) * | 1993-10-22 | 1998-05-26 | Canon Kabushiki Kaisha | Image taking apparatus |
CN2394240Y (en) * | 1999-02-01 | 2000-08-30 | 王德胜 | TV image magnifier |
US6320979B1 (en) * | 1998-10-06 | 2001-11-20 | Canon Kabushiki Kaisha | Depth of field enhancement |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5672575A (en) * | 1979-11-19 | 1981-06-16 | Toshiba Corp | Picture input unit |
NO305728B1 (en) * | 1997-11-14 | 1999-07-12 | Reidar E Tangen | Optoelectronic camera and method of image formatting in the same |
2005
- 2005-09-30 CN CN2008101691410A patent/CN101426085B/en not_active Expired - Fee Related
- 2005-09-30 CN CNB200580039822XA patent/CN100556076C/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN101426085A (en) | 2009-05-06 |
CN101065955A (en) | 2007-10-31 |
CN100556076C (en) | 2009-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101426085B (en) | Imaging arrangements and methods therefor | |
US8953064B1 (en) | Imaging arrangements and methods therefor | |
CN201043890Y (en) | Optical imaging distance measuring device for single-aperture multiple imaging | |
US10021340B2 (en) | Method and an apparatus for generating data representative of a light field | |
US8243157B2 (en) | Correction of optical aberrations | |
US20130107085A1 (en) | Correction of Optical Aberrations | |
CN100538264C (en) | Optical imaging distance measuring device for single-aperture multiple imaging | |
US20150029386A1 (en) | Microlens array architecture for avoiding ghosting in projected images | |
CN108780574A (en) | Device and method for calibrating optical system for collecting | |
Wu et al. | Geometry based three-dimensional image processing method for electronic cluster eye | |
US20190101765A1 (en) | A method and an apparatus for generating data representative of a pixel beam | |
CN113115024B (en) | 3D information acquisition equipment | |
Neumann | Computer vision in the space of light rays: plenoptic video geometry and polydioptric camera design | |
Hua | 3D Lensless Imaging: Theory, Hardware, and Algorithms | |
Choi | Shape and image reconstruction from focus | |
Vaughan | Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data | |
Hong | Light field applications to 3-dimensional surface imaging | |
Georgiev et al. | Introduction to the JEI Focal Track Presentations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C53 | Correction of patent for invention or patent application | ||
CB02 | Change of applicant information |
Address after: California, USA Applicant after: The Board of Trustees of the Leland Stanford Junior University Address before: California, USA Applicant before: Univ Leland Stanford Junior
|
COR | Change of bibliographic data |
Free format text: CORRECT: APPLICANT; FROM: UNIV LELAND STANFORD JUNIOR TO: THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY |
|
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20121003 Termination date: 20190930 |
|
CF01 | Termination of patent right due to non-payment of annual fee |