CN101426085B - Imaging arrangements and methods therefor - Google Patents
- Publication number
- CN101426085B
- Authority
- CN
- China
- Prior art keywords
- light
- image
- array
- block
- rays
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Description
This patent application is a divisional application of the invention patent application entitled "Imaging arrangements and methods therefor", international application number PCT/US2005/035189, international filing date September 30, 2005, which entered the Chinese national phase under application number 200580039822.X.
Related Patent Documents
This patent document claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Serial No. 60/615,179, filed October 1, 2004, and to U.S. Provisional Patent Application Serial No. 60/647,492, filed January 27, 2005, both of which are incorporated herein by reference in their entirety.
Technical Field
The present invention relates generally to imaging applications, and more particularly to processing image data to focus and/or correct the image data.
Background
Imaging applications such as those involving cameras, video cameras, microscopes and telescopes are generally limited in the amount of light they collect. That is, most imaging devices do not record most of the information about the distribution of light entering the device. For example, conventional cameras such as digital still cameras and video cameras do not record most of the information about the light distribution entering from the outside world. In these devices, the collected light often cannot be processed in various ways, such as focusing at different depths (distances from the imaging device), correcting lens aberrations, or manipulating the viewing angle.
For still imaging applications, a typical imaging device capturing a particular scene generally focuses upon a subject or object in the scene, while other parts of the scene fall out of focus. A similar problem exists for video imaging applications, where the image acquisition used in the video application cannot capture the scene entirely in focus.
Many imaging applications suffer from aberrations of the device (lens) used to collect light. Such aberrations may include, for example, spherical aberration, chromatic aberration, distortion, curvature of field, oblique astigmatism, and coma. Correcting for aberrations generally involves the use of corrective optics, which tend to add bulk, expense and weight to the imaging device. In some applications that benefit from small optics, such as camera phones and security cameras, the physical constraints associated with the application make the inclusion of additional optics undesirable.
The difficulties associated with the above have presented challenges to imaging applications, including those involving the acquisition and alteration of digital images.
Summary of the Invention
The present invention is directed to overcoming the above-mentioned challenges and others related to imaging devices and their implementations. The present invention is exemplified in a number of implementations and applications, some of which are summarized below.
According to an example embodiment of the present invention, light is detected together with directional information characterizing the detected light. The directional information is used with the detected light to generate a virtual image corresponding to one or both of a refocused image and a corrected image.
According to another example embodiment of the present invention, two or more objects at different focal depths in a scene are imaged, where the portions of the scene corresponding to the respective objects are imaged at different focal planes. Light from the scene is focused upon a physical focal plane and detected together with information characterizing the direction from which the light arrives at particular locations on the physical focal plane. For at least one object at a depth of field that is not focused at the physical focal plane, a virtual focal plane that is different from the physical focal plane is determined. Using the detected light and its directional characteristics, the portion of the light corresponding to a focused image of the at least one object at the virtual focal plane is collected and summed to form a virtual focused image of the at least one object.
According to yet another example embodiment of the present invention, a scene is digitally imaged. Light from the scene arriving at different locations on a focal plane is detected, and the angle of incidence of the light detected at the different locations on the focal plane is determined. The depth of field of the portion of the scene from which the detected light originates is detected and, together with the determined angles of incidence, used to digitally rearrange the detected light. Depending upon the application, the rearrangement includes refocusing and/or correcting for lens aberrations.
The above summary of the present invention is not intended to describe each illustrated embodiment or every implementation of the present invention. The figures and detailed description that follow more particularly exemplify these embodiments.
Brief Description of the Drawings
The invention may be more completely understood in consideration of the following detailed description of various embodiments of the invention in connection with the accompanying drawings, in which:
FIG. 1 shows a light ray capture and processing arrangement according to an example embodiment of the present invention;
FIG. 2 shows an optical imaging device according to another example embodiment of the present invention;
FIG. 3 is a process flow diagram for image processing according to yet another example embodiment of the present invention;
FIG. 4 is a process flow diagram for generating a preview image according to another example embodiment of the present invention;
FIG. 5 is a process flow diagram for processing and compressing image data according to yet another example embodiment of the present invention;
FIG. 6 is a process flow diagram for image synthesis according to another example embodiment of the present invention;
FIG. 7 is a process flow diagram for image refocusing according to yet another example embodiment of the present invention;
FIG. 8 is a process flow diagram for extending the depth of field in an image according to another example embodiment of the present invention;
FIG. 9 is a process flow diagram of another method for extending the depth of field in an image according to yet another example embodiment of the present invention;
FIG. 10 illustrates an example method of separating light rays according to another example embodiment of the present invention;
FIG. 11 illustrates a method of mapping sensor pixel locations to rays in L(u,v,s,t) space with respect to the acquired data, according to yet another example embodiment of the present invention;
FIG. 12 shows several images refocused at different depths according to another example embodiment of the present invention;
FIG. 13A shows a 2D imaging configuration according to yet another example embodiment of the present invention;
FIG. 13B shows a cone of rays from a single 3D point summed into one pixel, according to another example embodiment of the present invention;
FIGS. 14A-14C illustrate a method of computing images with different depths of field according to yet another example embodiment of the present invention;
FIG. 15 illustrates a method of tracing rays from a 3D point on a virtual film plane according to another example embodiment of the present invention;
FIG. 16 illustrates a method of obtaining light values according to yet another example embodiment of the present invention;
FIG. 17A shows an ideal 512x512 photograph according to another example embodiment of the present invention;
FIG. 17B shows an image produced with an f/2 biconvex spherical lens according to yet another example embodiment of the present invention;
FIG. 17C shows an image computed using an image correction method according to another example embodiment of the present invention;
FIGS. 18A-18C illustrate tracing rays shared in a color imaging system according to yet another example embodiment of the present invention;
FIGS. 19A-19F illustrate a method of implementing a mosaic array according to another example embodiment of the present invention;
FIG. 20 is a process flow diagram of a computational method for refocusing in the Fourier domain according to yet another example embodiment of the present invention;
FIG. 21A shows a triangle filtering method according to another example embodiment of the present invention;
FIG. 21B shows the Fourier transform of the triangle filtering method according to yet another example embodiment of the present invention;
FIG. 22 is a flow diagram of a method for refocusing in the frequency domain according to another example embodiment of the present invention;
FIG. 23 shows the set of rays passing through a desired focal point according to yet another example embodiment of the present invention;
FIGS. 24A-B show different views of a portion of a microlens array according to another example embodiment of the present invention;
FIG. 24C shows an image appearing on a photosensor according to yet another example embodiment of the present invention;
FIG. 25 illustrates an example embodiment of the present invention in which a virtual image is computed as it would appear on virtual film;
FIG. 26 illustrates a method of manipulating a virtual lens plane according to another example embodiment of the present invention;
FIG. 27 shows that the virtual film may take any shape, according to yet another example embodiment of the present invention;
FIG. 28 shows an imaging arrangement according to another example embodiment of the present invention;
FIG. 29 is a process flow diagram for precomputing a database of weights associated with each output image pixel and each ray sensor value, according to yet another example embodiment of the present invention;
FIG. 30 is a process flow diagram for computing an output image using the database of weights, according to another example embodiment of the present invention;
FIGS. 31A-D show various scalar functions selectively implemented as virtual aperture functions, according to yet another example embodiment of the present invention;
FIG. 32 shows a virtual aperture function that varies pixel by pixel, according to another example embodiment of the present invention;
FIG. 33 is a process flow diagram for a user selecting a region of an output image, editing an image portion and saving the output image, according to yet another example embodiment of the present invention;
FIG. 34 is a flow diagram for extending the depth of field in an image according to another example embodiment of the present invention; and
FIG. 35 is a process flow diagram for computing a refocused image from received ray sensor data, according to yet another example embodiment of the present invention.
While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the invention covers all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
Detailed Description
The present invention is believed to be useful for a variety of different types of devices, and has been found to be particularly suited to electronic imaging devices and applications. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of various examples in this context.
According to an example embodiment of the present invention, a four-dimensional (4D) light field (e.g., the light traveling along each ray in a region such as free space) is detected using an approach that involves determining the amount and direction of light arriving at a sensor located at a focal plane. The two-dimensional position of the light in the focal plane is detected together with information characterizing the direction from which the light arrives at that particular position on the plane. With this approach, the directional distribution of illumination arriving at different locations on the sensor is determined and used to form an image. In various discussions herein, the component or components implemented for sensing and/or measuring a light field are referred to as a "light ray sensor" or "ray sensor".
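The 4D light field L(u, v, s, t) described above can be sketched as a four-dimensional array, where (s, t) indexes the spatial position on the imaging plane and (u, v) indexes the direction of arrival (i.e., the position on the main-lens aperture). The resolutions below are hypothetical stand-ins for illustration, not values from this document:

```python
import numpy as np

# Assumed resolutions (hypothetical): 14x14 directional samples per
# spatial location, 64x64 spatial locations.
NU, NV = 14, 14
NS, NT = 64, 64

rng = np.random.default_rng(0)
L = rng.random((NU, NV, NS, NT))  # stand-in for measured ray values

# Fixing one direction (u, v) and varying (s, t) yields a "sub-aperture"
# image: the scene as seen through one small region of the main lens.
u0, v0 = NU // 2, NV // 2
sub_aperture = L[u0, v0]           # shape (NS, NT)

# Summing over all directions at each (s, t) reproduces the ordinary
# photograph that a conventional sensor at the focal plane would record.
conventional = L.sum(axis=(0, 1))  # shape (NS, NT)
print(sub_aperture.shape, conventional.shape)
```

Because the directional samples are retained rather than summed at capture time, the sum above can instead be taken along sheared slices of the array, which is what enables computational refocusing.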
In one application, an approach similar to the above is implemented using an imaging system having optics and sensors that sample the space of light rays incident upon an imaging plane, together with computational functionality that renders images from the set of measured rays in different ways. Depending upon the implementation, the optics, sensors and computational functionality are implemented using a variety of approaches, in combination or separately. For example, a camera with a lens (optics) that focuses an image onto a photosensor array (sensors) located at the imaging plane can be used to sample the ray space. Output from the photosensor array is used with the computational functionality (e.g., on a processor internal and/or external to the camera) to render images, such as by computing photographs that are focused at different depths or that have different depths of field, and/or by computationally correcting lens aberrations to produce higher-quality images.
In another example embodiment, the optical and sensor components of the imaging system direct light onto the sensor elements such that each sensor element senses a set of rays, including rays emanating from specific directions. In many applications, this set of rays is a bundle of rays that is localized in both space and direction. For many applications, this bundle converges to a single geometric ray of light as the optics and sensor resolution increase. Accordingly, various portions of the description herein refer to the values sensed by the sensor elements as "rays of light" or "light rays" or simply "rays", even though they are generally not limited to geometric rays.
Referring now to the drawings, FIG. 28 shows an imaging arrangement 2890 according to another example embodiment of the present invention. The imaging arrangement 2890 includes a main lens 2810, together with a ray sensor that measures the values of light arriving at different locations on the sensor and from different incident directions. In this context, the measurement of light values may be effected by detecting light arriving at different locations on the sensor, together with characteristics of the light such as intensity, to produce numerical values.
FIG. 1 shows an imaging system 100 according to another embodiment of the present invention. The imaging system 100 includes an imaging arrangement 190 having a main lens 110, a microlens array 120 and a photosensor array 130. In this case, the microlens array 120 and the photosensor array 130 implement a ray sensor. While FIG. 1 shows a particular main lens 110 (a single element) and a particular microlens array 120, those skilled in the art will recognize that a variety of lenses and/or microlens arrays (currently available or to be developed) may be selectively implemented in a similar manner, for example by replacing the main lens and/or microlens array shown.
Rays of light from a single point on an object 105 in the imaged scene arrive at a single point of convergence on the focal plane of the microlens array 120. This occurs, for example, when the imaged point on the object 105 is at a distance "d" from the main lens that is conjugate to the separation "s" between the main lens and the microlens array, as shown. The microlens 122 at this point of convergence separates these rays based on the direction of the light, creating a focused image of the aperture of the main lens 110 on the photosensors underneath the microlens.
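The conjugate relationship between the object-side distance "d" and the image-side separation "s" follows the standard thin-lens equation, 1/f = 1/d + 1/s. A quick sketch with hypothetical numbers (the focal length and object distance below are not from this document):

```python
def conjugate_image_distance(f_mm: float, d_mm: float) -> float:
    """Thin-lens equation 1/f = 1/d + 1/s, solved for the image-side
    distance s at which an object at distance d comes into sharp focus."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_mm)

# Hypothetical example: a 50 mm main lens focused on an object 1 m away.
s = conjugate_image_distance(50.0, 1000.0)
print(round(s, 2))  # 52.63 mm: a sharp image of that point forms here
```

An object point at this conjugate distance converges on a single microlens; points at other depths spread their rays across several microlenses, which is the information the later refocusing computations exploit.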
The photosensor array 130 detects light incident upon it and generates an output that is processed by one or more of a variety of components. In this application, the output light data is passed to sensor data processing circuitry 140, which uses the data together with positional information about each photosensor providing the data to generate an image of the scene (e.g., including objects 105, 106 and 107). The sensor data processing circuitry 140 is implemented, for example, with a computer or other processing circuitry implemented in a common component (e.g., a chip) or in different components. In one implementation, a portion of the sensor data processing circuitry 140 is implemented in the imaging arrangement 190, with another portion implemented in an external computer. Using the detected light (and, e.g., characteristics of the detected light) together with the known direction of incidence of the light arriving at the microlens array (computed using the known positions of each photosensor), the sensor data processing circuitry 140 selectively refocuses and/or corrects the light data in forming an image (where the re-imaging may be corrective). Various approaches to processing the detected light data are described in detail below, with and without reference to other figures. These approaches may be selectively implemented with sensor data processing circuitry consistent with the circuitry 140 above.
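One common way to compute the kind of refocusing that such processing circuitry performs is "shift-and-add": each sub-aperture image is translated in proportion to its directional offset and the results are summed. The sketch below illustrates that general idea under simplifying assumptions (integer-pixel shifts, an assumed 4D array layout); it is not the specific procedure claimed in this document:

```python
import numpy as np

def refocus(L: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-add refocusing sketch. L has shape (NU, NV, NS, NT).
    Each directional sample (u, v) is shifted in (s, t) by an amount
    proportional to its offset from the aperture center and to
    (1 - 1/alpha), where alpha is the ratio of the virtual focal depth
    to the physical focal depth."""
    nu, nv, ns, nt = L.shape
    out = np.zeros((ns, nt))
    shift_scale = 1.0 - 1.0 / alpha
    for u in range(nu):
        for v in range(nv):
            du = (u - (nu - 1) / 2.0) * shift_scale
            dv = (v - (nv - 1) / 2.0) * shift_scale
            # Integer-pixel shift for brevity; a real implementation
            # would interpolate sub-pixel shifts.
            out += np.roll(L[u, v], (int(round(du)), int(round(dv))),
                           axis=(0, 1))
    return out / (nu * nv)

# alpha = 1 applies no shift and reproduces the image focused at the
# physical focal plane (the mean over all directions).
L = np.random.default_rng(1).random((4, 4, 32, 32))
assert np.allclose(refocus(L, 1.0), L.mean(axis=(0, 1)))
```

Varying alpha sweeps the virtual focal plane forward and backward without recapturing the scene, which is the behavior FIG. 12 depicts.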
Various portions of the imaging system 100 are selectively implemented in a common or separate physical arrangement, depending upon the particular application. For example, when implemented with a variety of applications, the microlens array 120 and the photosensor array 130 may be combined into a common arrangement 180. In some applications, the microlens array 120 and the photosensor array 130 are coupled together on a common chip or other circuit arrangement. When implemented with a hand-held device such as a camera-like device, the main lens 110, microlens array 120 and photosensor array 130 are selectively combined into a common imaging arrangement 190 integrated with the hand-held device. Furthermore, certain applications involve implementing some or all of the sensor data processing circuitry 140 in a common circuit arrangement with the photosensor array 130 (e.g., on a common chip).
In some applications, the imaging system 100 includes a preview arrangement 150 for presenting a preview image to a user capturing an image. The preview arrangement is communicatively coupled to receive image data from the photosensor array 130. A preview processor 160 processes the image data to generate a preview image that is displayed on a preview screen 170. In some applications, the preview processor 160 is implemented on a common chip and/or in common circuitry with the image sensor 180. In applications where the sensor data processing circuitry 140 is implemented with the photosensor array 130 as described above, the preview processor 160 is selectively implemented with the sensor data processing circuitry 140, with some or all of the image data collected by the photosensor array 130 used to generate the preview image.
The preview image may be generated using relatively less computational functionality and/or less data than is used to generate a final image. For example, when implemented with a hand-held imaging device such as a camera or a mobile phone, a preview image that does not effect any focusing or lens correction is sufficient. As such, it may be desirable to implement relatively inexpensive and/or small processing circuitry to generate the preview image. In these applications, the preview processor generates the image at relatively low computational cost and/or using less data, for example by using the first extended depth-of-field computational approach described herein.
Depending upon the application, the imaging system 100 may be implemented in a variety of ways. For example, while the microlens array 120 is shown with several distinguishable microlenses by way of example, such an array is typically implemented with a multitude (e.g., thousands or millions) of microlenses. The photosensor array 130 generally includes a relatively finer pitch than the microlens array 120, with several photosensors for each microlens in the microlens array 120. In addition, the microlenses in the microlens array 120 and the photosensors in the photosensor array 130 are generally positioned such that light propagating through each microlens to the photosensor array does not overlap light propagating through adjacent microlenses.
In various applications, the main lens 110 is translated along its optical axis (in the horizontal direction as shown in FIG. 1) to focus on an object of interest at a desired depth "d", as exemplified between the main lens and the example imaging subject 105. By way of example, rays of light from a single point on the subject 105 are shown for discussion purposes. These rays travel to a single point of convergence at microlens 122 on the focal plane of the microlens array 120. The microlens 122 separates these rays based on direction, creating a focused image of the aperture of the main lens 110 on the set of pixels 132 in the array of pixels underneath the microlens. FIG. 10 illustrates one example approach to separating light rays, such that all rays emanating from a point on a main lens 1010 and arriving anywhere on the surface of the same microlens (e.g., 1022) are directed by that microlens to converge at the same point on a photosensor (e.g., 1023). The approach shown in FIG. 10 may, for example, be implemented in connection with FIG. 1 (i.e., with the main lens 1010 implementing the main lens 110, the microlens array 1020 implementing the microlens array 120, and the photosensor array 1030 implementing the photosensor array 130).
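Because each microlens separates rays by direction, a raw sensor pixel maps to a discrete ray in L(u,v,s,t) space (cf. FIG. 11): the microlens the pixel lies under gives the spatial coordinates (s, t), and the pixel's offset within that microlens image gives the directional coordinates (u, v). The sketch below assumes an idealized axis-aligned square microlens grid with a fixed number of pixels per microlens, which is a simplification for illustration:

```python
def pixel_to_ray(px: int, py: int, pixels_per_lens: int = 16):
    """Map a raw sensor pixel (px, py) to discrete ray coordinates
    (u, v, s, t): (s, t) is the index of the microlens the pixel lies
    under, and (u, v) is the pixel's offset within that microlens
    image, i.e., the region of the main-lens aperture the ray passed
    through. Assumes an axis-aligned square microlens grid."""
    s, u = divmod(px, pixels_per_lens)
    t, v = divmod(py, pixels_per_lens)
    return u, v, s, t

# Pixel (37, 5) lies under microlens (2, 0), at aperture offset (5, 5).
print(pixel_to_ray(37, 5))  # (5, 5, 2, 0)
```

A real system would also account for the microlens pitch not being an exact pixel multiple, vignetting at microlens edges, and any irregular microlens layout, typically via a calibration step.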
The image that forms under a particular microlens in the microlens array 120 dictates the directional resolution of the system for that location on the imaging plane. In some applications, directional resolution is enhanced by focusing the microlenses on the principal plane of the main lens, which helps sharpen the microlens images. In certain applications, the microlenses are at least two orders of magnitude smaller than the separation between the microlens array and the main lens 110. In these applications, the main lens 110 is effectively at the microlenses' optical infinity; to focus the microlenses, the photosensor array 130 is located at a plane at the microlenses' focal depth.
The separation "s" between the main lens 110 and the microlens array 120 is selected to achieve sharp images within the depth of field of the microlenses. In many applications, this separation is accurate to within about Δxp·(fm/Δxm), where Δxp is the width of a sensor pixel, fm is the focal depth of the microlenses, and Δxm is the width of a microlens. In one particular application, Δxp is about 9 microns, fm is about 500 microns, and Δxm is about 125 microns, and the separation between the microlens array 120 and the photosensor array 130 is accurate to within about 36 microns.
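As a quick check of the tolerance formula above, the stated example values reproduce the 36-micron figure (the helper name is illustrative, not from the patent):

```python
def separation_tolerance_um(pixel_width_um, microlens_focal_depth_um, microlens_width_um):
    """Tolerance dx_p * (f_m / dx_m) on the array-to-sensor separation, in microns."""
    return pixel_width_um * (microlens_focal_depth_um / microlens_width_um)

# Example values from the text: 9 um pixels, 500 um focal depth, 125 um microlenses.
tolerance = separation_tolerance_um(9.0, 500.0, 125.0)
print(tolerance)  # 36.0, matching the stated tolerance
```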
The microlens array 120 may be implemented using one or more of a variety of individual microlenses and configurations thereof. In one example embodiment, a plane of microlenses with potentially spatially-varying properties is used as the microlens array 120. For example, the microlens array may include lenses that are homogeneous and/or inhomogeneous, square or non-square in outline, regularly or irregularly distributed, arranged in repeating or non-repeating patterns, and optionally partially masked. The microlenses themselves may be convex, non-convex, or of arbitrary profile to effect a desired physical redirection of light, and the profile may vary from microlens to microlens across the plane. Different distributions and lens profiles may be combined selectively. These various embodiments provide sampling patterns that are higher spatially (and correspondingly lower angularly) in some regions of the array, and higher angularly (and correspondingly lower spatially) in other regions. One use of such data is to facilitate interpolation to match a desired spatial and angular resolution in the 4D space.
FIG. 24A shows a view (with the line of sight perpendicular to the plane) of a portion of a microlens array according to another example embodiment of the present invention. The microlenses are square and regularly distributed across the array.
FIG. 24B shows a view of a portion of a microlens array according to another example embodiment of the present invention. The planar distribution of the microlenses is irregular or non-repeating, and the microlenses are of arbitrary shape.
FIG. 24C shows, in connection with another example embodiment of the present invention, the image that appears on the photosensor when using a distribution with convex profiles as shown in FIG. 24A and a main lens with a circular aperture.
In other example embodiments, a regular mosaic of larger and smaller microlenses is used. In one implementation, the resulting photosensor data is interpolated to provide uniform sampling at the maximum spatial and angular resolution of the one or more microlenses in the mosaic.
FIGS. 19A-19F illustrate an approach to implementing a mosaic array such as that described above, in connection with one or more example embodiments of the present invention. FIG. 19A is a top plan view showing example relative sizes and arrangements of a plurality of microlenses. FIG. 19B is a view of the shapes of the images formed on the photosensor array after projection through the respective microlenses of FIG. 19A. FIG. 19C is a cross-sectional view of the array of FIG. 19A, showing that the microlenses have the same f-number and that their focal points lie on a common plane. This requires the smaller microlenses to be positioned closer to the focal plane than the larger microlenses. As a result, the main-lens images appearing under the respective microlenses are large yet non-overlapping, and are focused on a photosensor placed at the plane containing the focal points.
FIG. 19D shows a cross-sectional view of the microlenses of FIGS. 19A and 19C implemented in a complete imaging device comprising a main lens 1910, a mosaic microlens array 1920 and a photosensor arrangement 1930, according to another example embodiment of the present invention. Note that although the figures are drawn with a certain number of microlenses and a certain number of pixels per microlens, the actual numbers of microlenses and pixels may be chosen in various ways, such as by determining the resolution requirements of a given application and implementing an appropriate number of each.
FIG. 19E is a Cartesian ray diagram representing the space of rays that start at u on the main lens 1910 and terminate at s on the microlens array 1920 (although the ray space is 4D, it is shown in 2D for clarity). The sets of rays summed by the respective photosensors of FIG. 19D (labeled A-P) are illustrated in the Cartesian ray diagram of FIG. 19E. In the full 4D ray space, each photosensor integrates a 4D box of rays. Compared with the boxes for photosensors under the smaller microlenses, the 4D boxes for photosensors under the larger microlenses have half the width (twice the resolution) along the directional (u, v) axes and twice the width (half the resolution) along the spatial (x, y) axes.
In another example embodiment, the photosensor values are interpolated onto a regular grid so that the resolution along every axis matches the maximum resolution over all axes. FIG. 19F illustrates this approach, in which the ray-space boxes representing the individual photosensor values are subdivided by interpolating nearby box values. In the 2D ray space shown, each box is split in two, but in the 4D space each box is split in four (in half along each of its two longer sides). In some embodiments, the interpolated values are computed by analyzing neighboring values. In another embodiment, the interpolation is implemented as a weighted sum of the values of the original, unsubdivided boxes in the neighborhood of the desired value.
In some applications, the weighting is made to depend on a decision function based on the values in the neighborhood. For example, the weights may favor interpolation along the axis least likely to contain an edge in the 4D function space. The likelihood of an edge near the value may be estimated from the magnitudes of the gradients of the function values at those locations, as well as from the Laplacian components of the function.
In another example embodiment, the individual microlenses (e.g., in array 1920 of FIG. 19D or similar) are tilted inward so that their optical axes are all centered on the aperture of the main lens. This approach reduces aberrations in the images formed under the microlenses toward the edges of the array.
Referring again to FIG. 1, the aperture sizes of the main lens 110 and of the microlenses in the microlens array 120 (e.g., the effective sizes of the openings in the lenses) are also selected to suit the particular application in which the imaging arrangement 100 is implemented. In many applications, the relative aperture sizes are chosen so that the captured images are as large as possible without overlapping (i.e., so that light does not undesirably overlap onto adjacent photosensors). This is achieved by matching the f-numbers (focal ratios, i.e., the ratio of a lens's effective focal length to its aperture diameter) of the main lens and the microlenses. In this context, the effective f-number of the main lens 110 is the ratio of the distance "s" between the main lens 110 and the microlens array 120 to the diameter of the main lens aperture. In applications where the principal plane of the main lens 110 is translated relative to the plane in which the microlens array 120 lies, the aperture of the main lens is selectively altered to preserve this ratio, and thus the size of the images formed under the individual microlenses of the microlens array. In some applications, different main lens aperture shapes, such as a square aperture, are used to achieve a desired (e.g., efficient) packing of the array of images on the photosensor surface under the microlens array.
The following discussion relates to general applications of the imaging arrangement 100 of FIG. 1 in connection with one or more example embodiments of the present invention. Consider the two-plane light field "L" inside the imaging arrangement 100, where L(u, v, s, t) denotes the light traveling along the ray that intersects the main lens 110 at (u, v) and the plane of the microlens array 120 at (s, t). Assuming ideal microlenses in the microlens array 120 and ideal photosensors (e.g., pixels) on an aligned grid in the photosensor array 130, all light arriving at a photosensor also passes through its square parent microlens in the microlens array 120, and through that photosensor's conjugate square on the main lens 110. These two square regions on the main lens and on the microlens specify a small four-dimensional box in the light field, and the photosensor measures the total amount of light in the set of rays represented by that box. Accordingly, each photosensor detects such a four-dimensional box in the light field, so the light field detected by the photosensor array 130 is a box-filtered, rectilinearly sampled version of L(u, v, s, t).
FIG. 11 illustrates an approach to mapping sensor pixel positions to rays in the L(u, v, s, t) space of the acquired data, according to another example embodiment of the present invention. The approach shown in FIG. 11 and discussed here may be used, for example, with FIG. 1, where each photosensor in the photosensor array 130 corresponds to a sensor pixel. The image 1170 at the lower right is a downsample of the raw data read from the ray sensor (photosensor) 1130, with the image 1150 formed under one circular microlens circled. The image 1180 at the lower left is a magnified view of the portion of the raw data surrounding the circled microlens image 1150, with one photosensor value 1140 circled within the microlens. Since the circular image 1150 is an image of the lens aperture, the position of the selected pixel within the disk provides the (u, v) coordinates of the point at which the illustrated ray originates on the main lens. The position of the microlens image 1150 within the sensor image 1170 provides the (x, y) coordinates of the ray 1120.
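The decomposition described above can be sketched numerically. A minimal sketch, assuming square microlens images on a regular grid (the grid dimensions and helper names are illustrative assumptions, not part of the patent disclosure):

```python
def pixel_to_ray(px, py, pixels_per_microlens):
    """Map a raw sensor pixel (px, py) to discrete light-field coordinates.

    (x, y): index of the microlens image containing the pixel (spatial position).
    (u, v): position of the pixel within that microlens image (direction,
            i.e., where the ray crossed the main-lens aperture).
    """
    x, u = divmod(px, pixels_per_microlens)
    y, v = divmod(py, pixels_per_microlens)
    return (u, v, x, y)

# A pixel 3 to the right and 2 down inside the microlens image at grid
# cell (12, 7), with 10 x 10 pixels under each microlens:
print(pixel_to_ray(12 * 10 + 3, 7 * 10 + 2, 10))  # (3, 2, 12, 7)
```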
Although the mapping of sensor elements to rays is discussed with respect to the figure (and other example embodiments), the value associated with each sensor element may be selectively represented by the value of the set of rays that travels through the optics to that particular sensor element. Thus, in the context of FIG. 1, each photosensor in the photosensor array can be implemented to provide a value representing the set of rays traveling through the main lens 110 and the microlens array 120 to that photosensor. That is, each photosensor generates an output in response to the light incident upon it, and the position of each photosensor relative to the microlens array 120 is used to provide directional information about that incident light.
In one example embodiment, the resolution of the microlens array 120 is selected to match the desired resolution of the final images in the particular application. The resolution of the photosensor array 130 is selected so that each microlens covers as many photosensors as needed to match the desired directional resolution of the application, or the finest photosensor resolution achievable. In this way, the resolution of the imaging system 100 (and of the other systems discussed herein) is selectively tailored to a particular application, taking into account considerations such as the type of imaging, cost, complexity, and the equipment available for achieving a given resolution.
Once image data has been captured via the optics and sensors (e.g., using the imaging arrangement 190 of FIG. 1), a variety of computational functions and arrangements can be implemented to selectively process the image data. In one example embodiment of the present invention, different sets of photosensors capture the separated rays from the individual microlenses and pass information about the captured rays to a computational component such as a processor. An image of the scene is then computed from the measured set of rays.
In the context of FIG. 1, the sensor data processing circuit 140 is implemented to process the image data and compute images of the scene including the subjects 105, 106 and 107. In some applications, a preview arrangement 150 is also implemented to generate a preview image with a preview processor 160, the preview image being displayed on a preview screen 170. The preview processor 160 is selectively implemented with the sensor data processing circuit 140, with the preview image generated, for example, in a manner consistent with the approaches discussed below.
In another embodiment, for each pixel of the image output from the sensor arrangement, the computational component weights and sums a subset of the measured rays. Furthermore, the computational component may analyze and combine sets of images computed in the manner described above, for example using image compositing methods. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of several specific example embodiments of such computational components.
In connection with various example embodiments, the image data processing involves refocusing at least a portion of the image being captured. In some embodiments, the output image is generated to correspond to a photograph focused on desired elements of the particular scene. In some embodiments, the computed photograph is focused at a particular desired depth in the world (scene), with defocus blur increasing away from the desired depth, just as in a conventional photograph. Different focal depths can be selected to bring different subjects in the scene into focus.
FIG. 12 shows several images 1200-1240 refocused at different depths, computed from a single light field measured in accordance with another example embodiment of the present invention. The approach shown in FIG. 12 may be implemented, for example, using an imaging arrangement such as that shown in FIG. 1.
FIGS. 13A and 13B illustrate a refocusing approach according to another example embodiment of the present invention. The approach may be implemented, for example, by a computational/processing component of an imaging system, such as the sensor data processing circuit 140 of FIG. 1. Each output pixel (e.g., 1301) of the imaging arrangement corresponds to a three-dimensional (3D) point (e.g., 1302) on a virtual film plane 1310. The virtual film plane 1310 lies behind the main lens 1330, positioned so that the plane 1310 is optically conjugate to the desired focal plane in the world (not shown). That is, the virtual film plane 1310 lies where a film plane would desirably be positioned to capture a simple two-dimensional (2D) image (comparable, for example, to where photographic film is positioned in a conventional camera to capture a 2D image). By separating light according to direction (e.g., using the microlens array 120 of FIG. 1), the light that would have arrived at the virtual film plane 1310 can be computed selectively. In this way, the value of the output pixel 1301 can be computed by summing the cone of rays 1320 converging on the corresponding 3D point 1302. The values of these rays are gathered from the data acquired by the ray sensor 1350. For ease of viewing, FIG. 13A shows the imaging configuration in 2D. In FIG. 13B, a world focal depth closer to the main lens is selected, and the cone of rays 1330 from the 3D point 1340 is summed for the same pixel 1301.
In some embodiments, the desired ray values do not correspond exactly to the discrete sample locations captured by the ray sensor. In some embodiments, a ray value is estimated as a weighted sum of the values at selected nearby sample locations. In some implementations, the weighting corresponds to a four-dimensional filter kernel that reconstructs a continuous four-dimensional light field from the discrete sensor samples. In some implementations, the four-dimensional filter is implemented as a four-dimensional tent function, corresponding to quadrilinear interpolation of the 16 nearest samples in the four-dimensional space.
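The quadrilinear (4D tent) weighting can be sketched as follows; the function names are illustrative, and a real implementation would index the sensor array directly rather than call a sampling function:

```python
import itertools

def quadrilinear(lf, coords):
    """Quadrilinear (4D tent-function) interpolation of a discrete light field.

    lf: function (u, v, s, t) -> sample value at integer grid coordinates.
    coords: fractional (u, v, s, t) position to reconstruct.
    Sums the 16 surrounding grid samples, each weighted by the product of
    1D tent weights (1 - distance along each axis).
    """
    base = [int(c) for c in coords]               # lower integer corner
    frac = [c - b for c, b in zip(coords, base)]  # fractional offsets
    value = 0.0
    for corner in itertools.product((0, 1), repeat=4):  # the 16 neighbors
        weight = 1.0
        for axis, bit in enumerate(corner):
            weight *= frac[axis] if bit else (1.0 - frac[axis])
        value += weight * lf(*(b + o for b, o in zip(base, corner)))
    return value

# A light field that is linear in its coordinates is reproduced exactly:
linear = lambda u, v, s, t: u + 2 * v + 3 * s + 4 * t
# reproduces linear(0.5, 1.25, 2.0, 3.75) = 24.0 up to floating-point rounding
print(quadrilinear(linear, (0.5, 1.25, 2.0, 3.75)))
```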
FIG. 35 is a flow diagram for computing a refocused image from received ray sensor data, according to another example embodiment of the present invention. At block 3520, a set of sub-aperture images is extracted from the ray sensor data 3510, where each sub-aperture image consists of the single pixel under each microlens image taken at the same relative position beneath its microlens. At block 3530, the set of sub-aperture images is combined to produce the final output image. The sub-aperture images may optionally be shifted relative to one another and composited so as to bring a desired plane into focus.
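The shift-and-composite step can be sketched as a shift-and-add over sub-aperture images. This is a simplified sketch, assuming nearest-pixel shifts with wrap-around (`np.roll`) instead of the quadrilinear resampling discussed above; the array layout and parameter names are illustrative assumptions:

```python
import numpy as np

def refocus(lf, shift):
    """Shift-and-add refocusing of a light field.

    lf: array of shape (nu, nv, nx, ny); lf[u, v] is the sub-aperture image
        for main-lens position (u, v).
    shift: pixels of translation per unit aperture offset; different values
        bring different depths into focus (0 keeps the original focal plane).
    """
    nu, nv, nx, ny = lf.shape
    cu, cv = (nu - 1) / 2, (nv - 1) / 2
    out = np.zeros((nx, ny))
    for u in range(nu):
        for v in range(nv):
            du = int(round(shift * (u - cu)))   # translate each sub-aperture
            dv = int(round(shift * (v - cv)))   # image before summing
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)
```

With `shift = 0` the sum reproduces a photograph focused at the original plane; sweeping `shift` sweeps the virtual film plane, as in FIGS. 13A-13B.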
In another example embodiment, the darkening associated with pixels near the edges of the output image is mitigated. For pixels near the edges of the output image, some of the desired rays are not captured in the measured light field (they may fall outside the spatial or directional boundaries of the imaging arrangement, such as the microlens array 120 and photosensor array 130 of FIG. 1). In applications where this darkening is undesirable, the pixel values are normalized by dividing the value associated with each pixel (e.g., as captured by the photosensor array) by the fraction of its rays actually found in the measured light field.
As noted above, a variety of computational approaches are selected for different applications. The following discussion sets out a number of such approaches. In some cases, reference is made to the figures; in others, the approaches are described generally. In each of these applications, the particular approach may be implemented using a computational component such as the sensor data processing circuit 140 shown in FIG. 1.
In another example embodiment, the imaging of each output pixel of a particular imaging system corresponds to a virtual camera model in which the virtual film may be arbitrarily and/or selectively rotated or deformed, and the virtual main lens aperture correspondingly moved and resized as desired. As an example, FIG. 25 shows an example embodiment in which the virtual image is computed as if it had formed on virtual film 2560 behind a virtual lens aperture 2520 of arbitrary size, on a virtual lens plane 2530 that is permitted to differ from the physical main lens plane 2510. The pixel value corresponding to a point 2550 on the virtual film 2560 is computed by summing the rays that pass through the virtual aperture 2520 and converge on the point 2550, those rays being identified by their intersection points and directions of incidence on the ray sensor 2570.
FIG. 26 illustrates an approach to manipulating the virtual lens plane according to another example embodiment of the present invention. The virtual lens plane 2630 and/or the virtual film plane 2660 is selectively tilted relative to the physical main lens or another reference. Images computed using this approach have a resulting world focal plane that is not parallel to the imaging plane.
In another example embodiment, as illustrated in FIG. 27, the virtual film 2560 need not be planar, but may take any shape.
Various approaches involve the selective implementation of different apertures. In one example embodiment, the virtual aperture on the virtual lens plane is a generally circular hole, while in other example embodiments the virtual aperture is generally non-circular and/or implemented as multiple disjoint regions of any shape. In these and other embodiments, the notion of a "virtual aperture" can be generalized, corresponding in some applications to approaches in which the light data is processed to correspond to the light that would be received through the selected "virtual" aperture.
In various embodiments, the virtual aperture approach is implemented with a predetermined but arbitrary function on the virtual lens plane. FIGS. 31A-31D show different scalar functions selectively implemented as virtual aperture functions in connection with one or more example embodiments. The various functions include, for example, functions with smoothly varying values (as illustrated in FIG. 31B), functions implementing multiple disjoint regions (as illustrated in FIG. 31A), and functions taking negative values (as illustrated in FIG. 31D). To compute the value of a point on the virtual film, all rays that originate from different points on the virtual lens and converge on that point of the virtual film are weighted by the virtual aperture function and summed. In various other embodiments, the final value is computed by an arbitrary computational function of the ray values. For example, the computational function may not correspond to weighting by a virtual aperture function, but may contain discontinuous program branches that depend on the values of test functions computed on the ray values.
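A minimal sketch of that weighted sum follows; the annular mask and the numeric ray values are illustrative assumptions, not taken from the patent:

```python
def film_point_value(rays, aperture_fn):
    """Weighted sum of the rays converging on one virtual-film point.

    rays: iterable of ((u, v), radiance) pairs, where (u, v) is where the
          ray crosses the virtual lens plane.
    aperture_fn: scalar virtual-aperture function over (u, v); it may vary
          smoothly, define disjoint regions, or even take negative values.
    """
    return sum(aperture_fn(u, v) * radiance for (u, v), radiance in rays)

# A two-region (annular) aperture function, one of the arbitrary shapes the
# text allows: pass only rays crossing at radius between 0.5 and 1.0.
annulus = lambda u, v: 1.0 if 0.25 <= u * u + v * v <= 1.0 else 0.0
rays = [((0.0, 0.0), 10.0), ((0.6, 0.0), 4.0), ((0.0, 0.9), 2.0)]
print(film_point_value(rays, annulus))  # 6.0: the central ray is masked out
```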
In other example embodiments, the method of computing each output pixel may be chosen independently, as may be implemented in combination with the other embodiments described herein. For example, in one example embodiment, parameters including the orientation of the virtual lens plane and the virtual aperture size vary continuously from output pixel to output pixel. In another example, as shown in FIG. 32, the virtual aperture function used to integrate the rays of each output pixel varies pixel by pixel. In the output image 3200, pixel 3201 uses virtual aperture function 3210 while pixel 3251 uses virtual aperture function 3250.
In another example embodiment, the virtual aperture function varies pixel by pixel. In one particular embodiment, the function is chosen to mask out rays from an undesired portion of the particular scene, such as an undesired object in the foreground.
In yet another example embodiment, a user interactively selects the virtual aperture parameters, and the light data is processed in accordance with that selection. FIG. 33 is a flow diagram illustrating one such example embodiment. In a first block 3310, the process receives data from the ray sensor. At block 3320, the user selects a region of the output image; at block 3330, the user selects an image formation method; and at block 3340, the user alters the parameters of the selected method and visually inspects the image of the scene computed at block 3350 (e.g., on a computer monitor). Block 3360 checks whether the user has finished editing the image region and, if not, returns to block 3330. Block 3370 checks whether the user has finished selecting image regions to edit and, if not, returns to block 3320. When editing is complete, block 3380 saves the final edited image.
In another example embodiment, an image with an extended depth of field is computed by focusing on more than one subject at once. In one implementation, the depth of field of the output image is extended as it would be by conventional imaging with a stopped-down (reduced-size) main lens aperture: each output pixel is evaluated using the rays that converge on it through an aperture (on the virtual lens plane) smaller than the one used in sensing the light.
In one implementation involving the example system 100 of FIG. 1, the depth of field is extended by extracting, from under each microlens image, the value of the photosensor located at the same relative position within each microlens image. Referring to FIG. 1, this extension of the depth of field yields an image in which not only the subject 105 is in focus (due to the relationship between the distances "d" and "s"), but so are objects at other depths, such as the subjects 106 and 107, which might otherwise be blurred by defocus. This approach to extending the depth of field, combined with optional downsampling of the resulting image, is computationally efficient. The approach is selectively implemented in applications where the noise it introduces into the image can be tolerated, for example where the image is generated for preview purposes (e.g., for display on the preview screen 170 of FIG. 1). FIG. 4, discussed below, further relates to approaches for generating a preview image.
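This same-relative-position extraction amounts to a single slicing operation over the raw sensor data. A toy sketch, assuming square microlens images tiling the sensor and choosing the center pixel of each (the array sizes are illustrative):

```python
import numpy as np

def extended_dof_preview(raw, n):
    """Extract the pixel at the same relative position (here the center)
    under every microlens image, producing an all-in-focus preview.

    raw: 2D sensor array whose microlens images tile it in n x n blocks.
    """
    h, w = raw.shape
    # One pixel per microlens: sample the center of each n x n block.
    return raw[n // 2 : h : n, n // 2 : w : n]

raw = np.arange(36.0).reshape(6, 6)  # toy sensor, 3 x 3 pixel microlens images
preview = extended_dof_preview(raw, 3)
print(preview.shape)  # (2, 2): one pixel per microlens
```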
FIGS. 14A and 14B illustrate approaches to computing images with different depths of field in connection with one or more example embodiments. FIG. 14A shows an image computed by refocusing, together with a close-up. Note that the face in the close-up is blurred due to the shallow depth of field. The middle row of FIG. 14B shows a final image computed with an extended depth-of-field approach such as that described in the preceding paragraph.
FIG. 8 is a flow diagram of another computational approach to extending the depth of field in an image, according to another example embodiment of the present invention. At block 810, a set of images refocused at all focal depths of the particular scene is computed. At block 820, for each pixel, the image in the set with the highest local contrast at that pixel is determined. At block 830, the pixels with the highest local contrast are assembled into a final virtual image. With this approach, a desired signal-to-noise ratio (SNR) can be obtained because a relatively large number of photosensors is used (e.g., compared with selecting a single pixel (photosensor) under each microlens of the microlens array). Referring to FIG. 14C, the example image shown was generated using an approach similar to that described in connection with FIG. 8, and exhibits relatively low image noise.
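Blocks 810-830 can be sketched over a focal stack as follows. The patent does not specify the local-contrast measure; the Laplacian magnitude used here is one simple choice and is an assumption, as are the array layout and names:

```python
import numpy as np

def all_in_focus(stack):
    """Combine a focal stack into an extended depth-of-field image.

    stack: array of shape (n_depths, h, w); each slice is the same scene
    refocused at a different depth. For each pixel, keep the value from
    the slice with the highest local contrast, estimated here by the
    magnitude of the image Laplacian (one simple contrast measure).
    """
    contrast = np.empty_like(stack)
    for i, img in enumerate(stack):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        contrast[i] = np.abs(lap)
    best = np.argmax(contrast, axis=0)  # sharpest slice at each pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```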
In an alternative embodiment, the minimal set of refocused images to be computed is defined as follows, in terms of the distance between each refocused image's virtual film plane and the principal plane of the main lens through which the image's light travels to that virtual film plane. The minimum distance is set at the focal length of the main lens, and the maximum distance at the conjugate depth of the closest object in the scene. The spacing between successive virtual film planes is no greater than Δxm·f/A, where Δxm is the width of a microlens, f is the separation between the main lens and the microlens array, and A is the width of the lens aperture.
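The size of this minimal set follows directly from the spacing bound. A sketch with illustrative numbers (the helper and all values are assumptions, not figures from the patent):

```python
import math

def focal_stack_planes(f_mm, max_depth_mm, microlens_width_mm, aperture_width_mm):
    """Number of virtual film planes needed to cover [f, max_depth] at the
    maximum allowed spacing dx_m * f / A."""
    spacing = microlens_width_mm * f_mm / aperture_width_mm
    return math.ceil((max_depth_mm - f_mm) / spacing) + 1

# Illustrative numbers: 50 mm lens, farthest virtual film plane at 52 mm,
# 0.125 mm microlenses, 25 mm aperture -> plane spacing of 0.25 mm.
print(focal_stack_planes(50.0, 52.0, 0.125, 25.0))  # 9
```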
In another example embodiment, multiple refocused images are combined to produce the extended depth-of-field image by retaining, at each final pixel, the best-focused pixel from any image in the refocused set. In another embodiment, the pixels to be retained are selected so as to enhance local contrast and coherence with neighboring pixels. For general information on imaging, and for specific information on imaging approaches involving enhancing local contrast, reference may be made to A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, "Interactive Digital Photomontage," ACM Transactions on Graphics, Vol. 23, No. 3, pp. 292-300 (2004), which is fully incorporated herein by reference.
In another example embodiment of the present invention, the extended depth-of-field image is computed as follows. For each output image pixel, refocus computations are performed at that pixel so as to focus at different depths. At each depth, a measure of the uniformity of the converging rays is computed. The depth producing the (relatively) greatest uniformity is selected, and the pixel value computed at that depth is retained. With this approach, when an image pixel is in focus, all of its rays originate from the same point in the scene and are therefore likely to have similar color and intensity.
Although the uniformity measure can be defined in various ways, for many applications the following measure is used: for each color component of each ray, the variance of the color intensity is computed relative to the corresponding color component of the central ray (the ray arriving at the pixel at the angle closest to the optical axis of the main lens). All of these variances are summed, and the uniformity is taken to be the reciprocal of that sum.
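The measure just described can be sketched as follows. The variance-about-the-central-ray formulation is as stated in the text; the small epsilon guarding the reciprocal, and the data layout, are implementation assumptions:

```python
def uniformity(rays):
    """Uniformity of a set of rays converging on one pixel.

    rays: list of (r, g, b) intensities; rays[0] is taken as the central
    ray (the one arriving closest to the main lens's optical axis).
    Sums, over color channels, the mean squared deviation from the central
    ray's channel value, and returns the reciprocal of the sum.
    """
    central = rays[0]
    total = 0.0
    for channel in range(3):
        devs = [(ray[channel] - central[channel]) ** 2 for ray in rays]
        total += sum(devs) / len(devs)
    eps = 1e-12  # guard against division by zero for perfectly uniform rays
    return 1.0 / (total + eps)

# Rays from a single in-focus scene point agree, so uniformity is high:
in_focus = uniformity([(0.5, 0.2, 0.1)] * 4)
out_of_focus = uniformity([(0.5, 0.2, 0.1), (0.9, 0.1, 0.3),
                           (0.1, 0.4, 0.2), (0.5, 0.5, 0.5)])
print(in_focus > out_of_focus)  # True
```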
FIG. 34 is a flow diagram for extending the depth of field in an image according to another example embodiment of the present invention. At block 3410, a pixel is selected in the virtual image to be computed. At block 3420, the pixel is refocused at a plurality of focal depths, and the uniformity of the rays combined in refocusing at each depth is computed. At block 3430, the refocused pixel value associated with the greatest uniformity of the combined rays is retained as the final output image pixel value. The process continues at block 3440 until all pixels have been processed.
In another example embodiment, the process described above is adapted so that the selection of each final pixel value takes into account the neighboring pixel values, as well as the uniformity of the associated rays combined to compute those pixel values.
In yet another example embodiment of the present invention, the depth of field is extended by focusing each pixel at the depth of the closest object in that direction. FIG. 9 is a flow diagram for extending the depth of field in an image according to one such example embodiment. At block 910, a pixel is selected in the final virtual image to be computed. At block 920, the depth of the closest object is estimated along the ray (or set of rays) that travels from the selected pixel through the center of the lens into the scene.
At block 930, the value of the selected pixel is computed in an image refocused at the estimated depth. If additional pixels remain to be processed at block 940, another pixel is selected at block 910 and the process continues at block 920 for the newly selected pixel. When no additional pixels remain to be processed at block 940, the computed values of the selected pixels are used to construct the final virtual image.
In some embodiments involving depth-of-field extension, artifacts around the edges of objects closer to the lens, such as those commonly referred to as "blooming" or "halos," are mitigated or eliminated by disregarding rays originating from depths closer than the depth of the desired object. As an example, FIG. 23 shows the set of rays passing through a desired focal point 2301 across the full aperture onto an object of interest 2310. A portion of these rays is blocked from the main lens by object 2320; these correspond to rays 2340, which are detected by ray sensor 2350 but are disregarded in computing the image value at point 2301. In some embodiments, the resulting pixel value is normalized to account for the fraction of rays that are disregarded. These embodiments may be used individually, in combination with one another, or in combination with any other embodiments, including those involving extending the depth of field.
As described above, light data is processed to focus and/or correct images according to various example embodiments. Various approaches involving correction-type methods are described below. In some of these embodiments, aberrations are corrected by tracing rays through the actual optical elements (e.g., a lens or lens group) of the optics used in capturing the light, and mapping the traced rays to the particular photosensors that captured the light. Using the known imperfections exhibited by the optics and the known positions of the sensors that detect the light, the light data is rearranged.
In one correction-type embodiment, for each pixel on the film of the synthesized photograph, the set of world rays that would contribute to that pixel through idealized optics is computed. In one implementation, these rays are computed by tracing rays from the virtual film location back out through the ideal optics into the world. FIG. 15 illustrates, in connection with one such example embodiment, a method of tracing rays from a 3D point 1501 on a virtual film plane 1510 through an ideal thin main lens 1520 to a cone of world rays 1530. In some implementations, the desired set of rays 1525 need not correspond to directions through an actual lens, but may correspond to any set of rays to be weighted and summed to produce the desired image value.
FIG. 16 illustrates, in connection with another example embodiment, a method of obtaining the values of light traveling along the ideal rays for a particular application. These values are computed by tracing the desired ideal world rays 1630 through the actual main lens 1650, a single element with spherical interfaces, used to physically direct the actual world rays to ray sensor 1640 when the light was measured (detected). In this embodiment, rays that would ideally converge at a single 3D point (1601) do not converge, reflecting a defect of lenses with spherical interfaces known as spherical aberration. Ray sensor 1640 provides a value for each of the aberrated rays (such as 1651) for use in correcting the spherical aberration.
FIGS. 17A-17C show example results of computer simulations using the through-the-lens correction method. The image in FIG. 17A is an ideal 512x512 photograph (as seen through ideal optics). The image in FIG. 17B is an image produced using an actual f/2 biconvex spherical lens, exhibiting loss of contrast and blurring. The image in FIG. 17C is a photograph computed using the image-correction method described above, with optics and a sensor arrangement providing 10x10 directional (u,v) resolution at each of 512x512 microlenses.
In another example embodiment of the present invention, the chromatic aberration of the main lens used to capture the image is corrected. Chromatic aberration is caused by the divergence of rays as they are physically directed through the optics, due to wavelength-dependent differences in the direction of refraction. Incident rays are traced through the actual optics, taking into account the wavelength-dependent refractive index exhibited by the actual optical elements. In some applications, each color component of the system is traced separately according to its dominant wavelength.
In another example embodiment, as shown in FIG. 18A, the red, green, and blue components that coexist in a color imaging system are traced separately. The green world rays are computationally traced back into the imaging system to produce green rays 1830, determining where, and in what direction, they intersect color ray sensor 1810. Similarly, FIG. 18B illustrates computationally tracing the desired blue world rays 1820, which are refracted to a greater extent than the green rays. FIG. 18C illustrates computationally tracing the desired red world rays 1830, which are refracted to a lesser extent than the green rays. The value of each ray is computed from the values of ray sensor 1810 using, for example, methods described in connection with other example embodiments herein. The light-field values for the rays are integrated to compute the corrected image value for each particular film pixel. For some applications, chromatic aberration is further improved by focusing each color channel on the plane where its wavelength is best focused.
The desired rays may not converge exactly on one of the discrete ray values sampled by the ray sensor. In some embodiments, the values for these rays are computed as a function of the discrete ray values. In some embodiments, this function corresponds to a weighted sum of the discrete ray values in the neighborhood of the desired ray. In some embodiments, the weighted sum corresponds to a 4D convolution of the discrete sample values with a predetermined convolution kernel function. In other embodiments, the weighting may correspond to quadrilinear interpolation over the 16 nearest neighbors. In further embodiments, the weighting may correspond to cubic or bicubic interpolation over the 16 nearest neighbors.
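The quadrilinear option above can be sketched as follows. This is an illustrative implementation over a 4D grid stored as nested lists; the data layout and function name are assumptions made for the example.

```python
import itertools

def quadrilinear(L, s, t, u, v):
    # L: 4D grid of discrete ray values indexed as L[si][ti][ui][vi].
    # Returns the value interpolated from the 16 nearest neighbors, each
    # weighted by the product of four 1D linear-interpolation weights.
    coords = (s, t, u, v)
    base = [int(c) for c in coords]               # lower grid corner
    frac = [c - b for c, b in zip(coords, base)]  # fractional offsets
    value = 0.0
    for corner in itertools.product((0, 1), repeat=4):
        w = 1.0
        for f, bit in zip(frac, corner):
            w *= f if bit else (1.0 - f)
        if w == 0.0:
            continue  # skip zero-weight corners (also avoids edge indexing)
        si, ti, ui, vi = (b + bit for b, bit in zip(base, corner))
        value += w * L[si][ti][ui][vi]
    return value
```

Because the weights are products of per-dimension linear weights, a light field that varies linearly in each coordinate is reproduced exactly.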
It is worth noting that the example correction process has been described in terms of ray tracing for conceptual simplicity; various other methods may also be used to implement the correction. In one embodiment, for each desired output pixel, the set of contributing photosensor values and their relative weights are precomputed. As noted above, these weights are a property of many factors, including the optics, the sensor, the desired set of rays to be weighted and summed for each output pixel, and the desired light-field reconstruction filter. These weights are optionally precomputed using ray tracing and stored. The corrected image is formed by weighting and summing the appropriate sensed light-field values for each output pixel.
FIGS. 29 and 30 illustrate other example embodiments used in conjunction with the correction methods described above. FIG. 29 is a process flow diagram for precomputing a database of weights that associate the ray (light) sensors with the output pixel values to which each sensor contributes. In the first two blocks, 2910 and 2920, for the desired image-formation process, a data set (e.g., in a database) is received consisting, for each output image pixel, of the set of ideal world rays to be summed to produce that pixel's value, along with a specification of the actual main-lens optics used to physically direct the light to the ray sensors. At block 2925, an image pixel is selected. For that pixel's output value, at block 2930 the associated set of world rays is computationally traced through a virtual representation of the main-lens optics to the ray sensors. This yields the set of weights to be applied to the individual ray-sensor values in computing the output pixel value. These values are stored in the output database at block 2940. Block 2950 checks whether all pixels have been processed and, if not, returns to block 2925. If all pixels have been processed, a final block 2960 saves the completed database.
FIG. 30 is a flowchart of a process for computing an output image using a weight database computed by a process such as that of FIG. 29. In blocks 3010 and 3020, the process receives the database and a set of ray-sensor values captured through the main-lens optics used in computing the database. At block 3025, a pixel in the output image is selected so that its final image value can be computed. For the selected pixel, block 3030 uses the database to look up the set of contributing ray sensors and their weights. At block 3040, the corresponding sensor values provided at 3020 are weighted and summed to form that image pixel value. At block 3050, a check is made to see whether all image pixels have been processed. If not, the process returns to block 3025; if so, the output image is saved at block 3060.
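The weighted-sum step of blocks 3030-3040 can be sketched as follows, assuming the database is represented as a mapping from each output pixel to a list of (sensor index, weight) pairs. This representation is hypothetical; the text does not specify a storage format.

```python
def build_output_image(weight_db, sensor_values):
    # weight_db: mapping {output_pixel: [(sensor_index, weight), ...]},
    # precomputed by tracing each pixel's ideal world rays through a
    # virtual model of the main-lens optics (FIG. 29).
    # sensor_values: flat sequence of measured ray-sensor values (FIG. 30).
    image = {}
    for pixel, contributions in weight_db.items():
        # Block 3040: weight and sum the contributing sensor values.
        image[pixel] = sum(w * sensor_values[i] for i, w in contributions)
    return image
```

Because the weights fold in the optics, the sensor geometry, and the reconstruction filter, image formation itself reduces to one sparse weighted sum per output pixel.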
In various example embodiments, the light data is processed in the frequency domain, using approaches involving refocusing computations that operate in the Fourier domain. FIG. 20 is a flowchart illustrating one such method in connection with another example embodiment. The input to the algorithm is a discrete 4D light field 2010, denoted L(s,t,u,v), representing the ray that starts at (u,v) on the main lens and terminates at (s,t) on the microlens plane (e.g., a ray from main lens 110 of FIG. 1 terminating at the plane of microlens array 120). The first step is to compute the discrete 4D Fourier transform 2020 of the light field. The value of the 4D Fourier transform at (ks,kt,ku,kv), denoted M(ks,kt,ku,kv), is defined by the following equation:

M(ks,kt,ku,kv) = Σs Σt Σu Σv L(s,t,u,v) · exp(−2πi·(s·ks + t·kt + u·ku + v·kv))   (1)
where the exp function is the exponential function, exp(x) = e^x. In some embodiments, the discrete light field is sampled on a rectilinear grid in 4D space, and the Fourier transform is computed using a fast Fourier transform (FFT) algorithm.
The next step, performed once for each depth of the desired refocused images, is to extract an appropriate 2D slice 2030 of the 4D Fourier transform and compute the 2D inverse Fourier transform of the extracted slice, yielding photographs 2040 focused at different depths. For a function G(kx,ky), the 2D inverse Fourier transform g(x,y) is defined by the following equation:

g(x,y) = ∫∫ G(kx,ky) · exp(2πi·(x·kx + y·ky)) dkx dky   (2)
The values to extract on the 2D slice are determined by the desired depth of focus. Consider the conjugate plane (on the image side of the lens) of the desired world focal plane. When the separation between this conjugate plane and the main lens is D, and the separation between the microlens plane and the main lens is F, the value of the extracted 2D slice at coordinates (kx,ky) is given by
G(kx,ky)=1/F2·M(kx(1-D/F),ky(1-D/F),kxD/F,kyD/F) (3)G(k x , k y )=1/F 2 M(k x (1-D/F), k y (1-D/F), k x D/F, k y D/F) (3 )
Artifacts due to discretization, resampling, and the Fourier transform are selectively mitigated using various methods. In classical signal processing, when a signal is sampled, it is replicated periodically in the dual domain. When the sampled signal is reconstructed by convolution, it is multiplied in the dual domain by the Fourier transform of the convolution filter. Ideally, the original central replica is isolated, eliminating all other replicas. The ideal filter is the 4D sinc function, sinc(s)sinc(t)sinc(u)sinc(v), where sinc(x) = sin(πx)/(πx); however, this function has infinite extent.
In various approaches, filters of finite extent are used for frequency-domain processing; such filters can exhibit defects that may be selectively mitigated. FIG. 21A illustrates these defects for a specific 1D filter, in connection with the corresponding discussion of their mitigation below. FIG. 21A shows the triangle filter used in 1D linear interpolation (or as the basis of the 4D quadrilinear filter). FIG. 21B shows the Fourier transform of this triangle filter, which is not of unit value within the band limit (see 2010) and gradually decays to smaller fractional values with increasing frequency. Moreover, the filter is not truly band-limited: it contains energy at frequencies beyond the band limit where rejection is desired (2020).
The first defect noted above leads to "rolloff" artifacts that can darken the borders of computed photographs. The decay of the filter's spectrum with increasing frequency means that the spatial light-field values modulated by this spectrum also "roll off" to fractional values toward the borders.
The second defect noted above concerns aliasing artifacts in computed photographs, associated with energy at frequencies above the band limit. Non-zero energy extending beyond the band limit means that the periodic replicas are not fully eliminated, leading to two kinds of aliasing. First, replicas that appear parallel to the slicing plane appear as 2D replicas encroaching on the borders of the final photograph. Second, replicas positioned perpendicular to this plane are projected and summed onto the image plane, creating ghosting and loss of contrast.
In one example embodiment, the rolloff-type defect described above is corrected by multiplying the input light field by the reciprocal of the filter's inverse Fourier spectrum, to cancel the effect introduced during resampling. In this example embodiment, the multiplication is performed as a preprocessing step of the algorithm, before the 4D Fourier transform. Although it corrects the rolloff error, this pre-multiplication amplifies the energy of the light field near its borders, maximizing the energy that folds back into the desired field of view as aliasing.
Three methods of suppressing aliasing artifacts, oversampling, superior filtering, and zero-padding, are used individually or in combination in the various example embodiments described below. Oversampling during extraction of the 2D slice increases the replication period in the spatial domain. This means that less energy from the tails of the in-plane replicas falls within the borders of the final photograph. Increasing the sampling rate in one domain results in an increased field of view in the other domain. Aliased energy from neighboring replicas falls in these peripheral regions and is cropped away to isolate the original central image of interest.
Another method of mitigating aliasing involves finite-extent filters that approximate the perfect spectrum (as would be exhibited by the ideal filter) as closely as possible. In one example embodiment, the 4D separable Kaiser-Bessel function kb4(s,t,u,v) = kb(s)kb(t)kb(u)kb(v) is used as the filter, where
In this equation, I0 is the standard zeroth-order modified Bessel function of the first kind, W is the desired width of the filter, and P is a parameter that depends on W. In this example embodiment, W takes the values 5, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, and 1.5, with corresponding P values of 7.4302, 6.6291, 5.7567, 4.9107, 4.2054, 3.3800, 2.3934, and 1.9980. For general information regarding aliasing, and for specific information regarding methods of mitigating aliasing in connection with one or more example embodiments of the present invention, reference may be made to J. L. Jackson, C. H. Meyer, D. G. Nishimura, and A. Macovski, "Selection of a convolution function for Fourier inversion using gridding," IEEE Transactions on Medical Imaging, vol. 10, no. 3, pp. 473-478, 1991, which is fully incorporated herein by reference. In one implementation, a width W of less than about 2.5 is used to achieve the desired image quality.
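A sketch of the separable Kaiser-Bessel filter is given below. The text above does not reproduce the closed form of kb(x); the form used here follows the Jackson et al. gridding kernel, so the normalization and support are assumptions, while the pair (W, P) = (2.5, 3.3800) is taken from the values listed above.

```python
import math

def bessel_i0(x):
    # Zeroth-order modified Bessel function of the first kind, I0(x),
    # computed from its power series: sum over k of ((x/2)^k / k!)^2.
    total, term = 1.0, 1.0
    for k in range(1, 30):
        term *= (x / (2.0 * k)) ** 2
        total += term
    return total

def kb(x, W=2.5, P=3.3800):
    # 1D Kaiser-Bessel kernel of width W (assumed form, after Jackson et al.).
    if abs(x) > W / 2.0:
        return 0.0  # finite extent: zero outside the window
    return bessel_i0(P * math.sqrt(1.0 - (2.0 * x / W) ** 2)) / W

def kb4(s, t, u, v, W=2.5, P=3.3800):
    # Separable 4D filter kb4(s,t,u,v) = kb(s)kb(t)kb(u)kb(v).
    return kb(s, W, P) * kb(t, W, P) * kb(u, W, P) * kb(v, W, P)
```

The kernel peaks at the origin, decays smoothly to the edge of its support, and vanishes outside it, which is what allows it to approximate the ideal sinc spectrum with only finite extent.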
In another example embodiment, aliasing is mitigated by padding the light field with a small border of zero values before pre-multiplication and Fourier transformation. This pushes energy slightly away from the borders and minimizes the amplification of aliased energy caused by the rolloff-correcting pre-multiplication.
FIG. 22 is a flowchart illustrating a method of refocusing in the frequency domain using the various corrections described above, according to another example embodiment of the present invention. At block 2210, a discrete 4D light field is received. In a preprocessing phase performed once per input light field, block 2215 checks whether aliasing reduction is desired and, if so, block 2220 pads the light field with a small zero-valued border (e.g., 5% of the width in that dimension). At block 2225, a check is made to determine whether rolloff correction is desired and, if so, the light field is modulated at block 2230 by the reciprocal of the Fourier transform of the resampling filter. In the final block of the preprocessing phase, the 4D Fourier transform of the light field is computed at block 2240.
In a refocusing phase performed once per desired focal depth, the process receives the desired focal depth of the refocused image at block 2250, for example as directed by a user. At block 2260, a check is made to determine whether aliasing reduction is desired. If not, block 2270 extracts a 2D slice of the Fourier transform of the light field using the desired 4D resampling filter, with the trajectory of the 2D slice corresponding to the desired focal depth; block 2275 then computes the 2D inverse Fourier transform of the extracted slice and proceeds to block 2290. If aliasing reduction is desired at block 2260, the process proceeds to block 2280, where the 2D slice is extracted using the desired 4D resampling filter with oversampling (e.g., 2x oversampling in each of the two dimensions). At block 2283, the 2D inverse Fourier transform of the slice is computed, and at block 2286 the resulting image is cropped to its original, non-oversampled size, after which the process proceeds to block 2290. At block 2290, a check is made to determine whether refocusing is complete. If not, another focal depth is selected at block 2250 and the process proceeds as described above. If refocusing is complete, the process exits at block 2295.
The asymptotic computational complexity of this frequency-domain algorithm is lower than that of refocusing by explicitly summing rays, as described in the alternative embodiments above. Suppose the input discrete light field has N samples in each of its four dimensions. Then, for refocusing at each new depth, the computational complexity of the algorithm that explicitly sums rays is O(N⁴). For refocusing at each new depth, the computational complexity of the frequency-domain algorithm is O(N² log N), dominated by the cost of the 2D inverse Fourier transform. The preprocessing step, however, costs O(N⁴ log N) for each new light-field data set.
In another example embodiment, the captured rays are optically filtered. Although not limited to such applications, some examples of such filters are neutral-density filters, color filters, and polarizing filters. Any existing filter, or filter yet to be developed, may be used to perform the desired filtering of the light. In one implementation, the rays are optically filtered in groups or individually, such that each group or individual ray is filtered differently. In another implementation, the filtering is achieved using a spatially varying filter attached to the main lens. In one example application, a gradient filter, such as a neutral-density gradient filter, is used to filter the light. In another implementation, a spatially varying filter is used in front of one or more of the ray sensor, the microlens array, or the photosensor array. Referring to FIG. 1, as an example, one or more such filters are selectively placed in front of one or more of main lens 110, microlens array 120, and photosensor array 130.
In another example embodiment of the present invention, a computing component such as a processor is programmed to selectively choose the rays combined in computing an output pixel, so as to achieve a desired net filtering for that pixel value. As an example, consider an optical neutral-density gradient filter at the main lens; each lens-aperture image appearing under a microlens is then weighted according to the filter gradient across its extent. In one implementation, the output image is computed by selecting, under each microlens, the photosensor lying at the point on the gradient that matches the desired level of neutral-density filtering for the output image pixel. For example, to produce an image in which each pixel is filtered to a greater extent, each pixel value is set to the sensor value lying, under the corresponding microlens, at the extreme of the gradient corresponding to maximum filtering.
FIG. 2 is a data-flow diagram illustrating a method of processing images in connection with other example embodiments of the invention. Image sensor arrangement 210 acquires image data using a microlens/photosensor chip arrangement 212, in a manner similar to microlens array 120 and photosensor array 130 shown in FIG. 1 and described above. Image sensor arrangement 210 optionally includes integrated processing circuitry 214, which carries out certain processing to prepare the acquired image data for transmission.
Sensor data generated at image sensor arrangement 210 is transmitted to signal processor 220. The signal processor includes a low-resolution image processor 222 and one or both of a compression processor 224 and a (light) ray-direction processor 226; each of these processors is implemented separately or using common processor functionality, depending on the application. In addition, each processor shown in FIG. 2 is selectively programmed with one or more of the processing functions described in connection with the other figures or elsewhere herein. Signal processor 220 and image sensor arrangement 210 are optionally implemented on a common device or component, for example on a common circuit and/or in a common imaging device.
Low-resolution image processor 222 uses the sensor data received from image sensor arrangement 210 to generate low-resolution image data, which is sent to a viewfinder display 230. An input device 235, such as a button on a camera or video camera, sends an image-capture request to signal processor 220 to request, for example, the capture of a particular image shown in viewfinder display 230 and/or, where so implemented, to initiate video imaging.
In response to an image-capture request or other direction, signal processor 220 uses the sensor data captured by image sensor arrangement 210 to generate processed sensor data. In some applications, compression processor 224 is implemented to generate compressed raw data that is transferred to a data storage arrangement 240 (e.g., memory). This raw data is then selectively processed at signal processor 220 and/or at an external computer 260 or other processing arrangement, for example to carry out ray-direction processing such as that performed with ray-direction processor 226 as described below.
In certain applications, ray-direction processor 226 is implemented to process the data received at signal processor 220, rearranging the sensor data for use in generating focused and/or corrected image data. Ray-direction processor 226 uses one or both of the sensor data received from image sensor arrangement 210 and the raw data transferred to data storage arrangement 240. In these applications, ray-direction processor 226 uses the ray-mapping characteristics of the particular imaging arrangement (e.g., a camera, video camera, or mobile telephone) in which image sensor arrangement 210 is implemented to determine the rearrangement of the rays sensed with microlens/photosensor chip 212. Image data generated with ray-direction processor 226 is sent to data storage arrangement 240 and/or to a communications link 250 for a variety of uses, such as streaming the image data to a remote location or otherwise sending it there.
In some applications, integrated processing circuitry 214 includes some or all of the processing functions of signal processor 220, implemented for example with a CMOS-type processor or another processor of appropriate functionality. For example, low-resolution image processor 222 is selectively included in integrated processing circuitry 214, and low-resolution image data is sent directly from image sensor arrangement 210 to viewfinder display 230. Similarly, compression processor 224, or similar functionality, is selectively implemented with integrated processing circuitry 214.
In some applications, computation of the final images may take place on integrated processing circuitry 214 (e.g., in some digital still cameras that output only final images). In other applications, image sensor arrangement 210 may simply send the raw ray data, or a compressed version of this data, to an external computing device such as a desktop computer. Computation of the final images from this data then takes place on the external device.
FIG. 3 is a flowchart of a method of processing image data according to another example embodiment of the present invention. At block 310, image data is captured at a camera or other imaging arrangement using a main lens, or a lens group, together with a microlens/photosensor array such as that shown in FIG. 1. If a preview image is desired at block 320, a preview image is generated at block 330 using, for example, a viewfinder or other type of display. The preview image is displayed, using a subset of the captured image data, on the viewfinder of a camera or video camera, for example.
At block 340, the raw data from the photosensor array is processed and compressed for use. At block 350, ray data is extracted from the processed and compressed data. This extraction involves, for example, detecting a bundle or set of rays incident on a particular photosensor in the photosensor array. At block 360, ray-mapping data is retrieved for the imaging arrangement with which the image data was captured. At block 370, the ray-mapping data and the extracted ray data are used to synthesize a rearranged image. For example, the extraction, mapping, and synthesis of blocks 350-370 are selectively carried out by determining the bundle of rays with which a particular pixel of a scene was acquired, and integrating the ray energies to synthesize a value for that pixel. In some applications, the ray-mapping data is used to trace the rays for each particular pixel through the actual lens used in acquiring the image data. For example, by determining at block 370 the appropriate set of rays to be added together to focus upon a selected subject at a particular focal depth, the rays can be rearranged to arrive at a focused image. Similarly, by determining the appropriate arrangement of rays to correct for a condition such as lens aberration in the imaging arrangement, the rays can be rearranged to produce an image relatively free of characteristics associated with the aberration or other condition.
Various approaches are selectively used to generate preview images for camera-type and other applications. FIG. 4 is a flowchart of a process for generating such a preview image according to another example embodiment of the present invention. The approach shown in FIG. 4 and described below may be implemented in connection with, for example, the generation of a preview image at block 330 of FIG. 3.
A preview instruction for raw sensor data is received at block 410. At block 420, the center pixel is selected from each microlens image in the raw sensor image data. The selected center pixels are collected at block 430 to form a high depth-of-field image. At block 440, the high depth-of-field image is down-sampled to a resolution suitable for the viewfinder display. Referring to FIG. 2, as an example, such down-sampling is selectively carried out at one or both of image sensor arrangement 210 and signal processor 220. The resulting preview image data is sent to the viewfinder display at block 450, and at block 460 the viewfinder displays an image using the preview image data.
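The center-pixel extraction of blocks 420-430 can be sketched as follows, assuming the raw sensor data is laid out as a 2D grid of n x n pixel blocks, one block per microlens. This layout, and the function name, are assumptions made for illustration.

```python
def preview(raw, n):
    # raw: 2D grid of raw sensor values, assumed organized as n x n pixel
    # blocks, one block per microlens.  Selecting the central pixel of each
    # block (blocks 420-430) yields a high depth-of-field sub-aperture
    # image, which can then be down-sampled for the viewfinder (block 440).
    c = n // 2
    rows = len(raw) // n
    cols = len(raw[0]) // n
    return [[raw[i * n + c][j * n + c] for j in range(cols)]
            for i in range(rows)]
```

Because each selected pixel corresponds to rays passing near the center of the main-lens aperture, the result approximates a small-aperture (high depth-of-field) photograph without any refocusing computation.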
FIG. 5 is a flowchart of the processing and compression of image data according to another example embodiment of the present invention. The approach shown in FIG. 5 and described below may be implemented in connection with the processing and compression of image data at block 340 of FIG. 3. When implemented with an arrangement such as that shown in FIG. 2, the approach shown in FIG. 5 may be implemented at one or both of, for example, image sensor arrangement 210 and signal processor 220.
At block 510, raw image data is received from the sensor array. If coloring is desired at block 520, color-filter-array values are demosaicked at block 530 to generate color at the sensors. If adjustment and alignment are desired at block 540, the microlens images are adjusted and aligned with the photosensor array at block 550. If interpolation is desired at block 560, pixel values are interpolated at block 570 to an integral number of pixels associated with each microlens. At block 580, the processed raw image data is compressed and presented for synthesis processing (e.g., for forming refocused and/or corrected images).
FIG. 6 is a flowchart of image synthesis according to another example embodiment of the present invention. The approach shown in FIG. 6 and described below may be implemented in connection with the image-synthesis approach shown at block 370 of FIG. 3 and described further above.
At block 610, raw image data is received from the photosensor array. If refocusing is desired at block 620, the image data is refocused at block 630 using, for example, approaches described herein to selectively rearrange the light represented by the raw image data. If image correction is desired at block 640, the image data is corrected at block 650. In various applications where both refocusing and image correction are desired, the refocusing of block 630 is carried out before, or concurrently with, the image correction of block 650. At block 660, a resultant image is generated using the processed image data, including refocused and corrected data where available.
FIG. 7A is a flowchart of image refocusing using a lens arrangement according to another example embodiment of the present invention. The approach shown in FIG. 7A and described below may be implemented, for example, in connection with the refocusing of image data at block 630 of FIG. 6.
At block 710, a virtual focal plane is selected for the portion of the image to be refocused. At block 720, a virtual image pixel in the virtual focal plane is selected. If correction (e.g., for lens aberrations) is desired at block 730, the value of the virtual ray (or set of virtual rays) passing between the selected pixel and each particular lens position is computed at block 740. In one application, this computation is facilitated by computing the conjugate ray falling on the selected pixel and tracing that ray along its path through the lens arrangement.
At block 750, the ray values (or sets of virtual rays) for the various lens positions for the particular focal plane are summed to determine a total value for the selected pixel. In some applications, the sum accumulated at block 750 is a weighted sum, with certain rays (or sets of rays) given greater weight than others. If additional pixels remain to be refocused at block 760, another pixel is selected at block 720 and the process continues until no pixels remain to be refocused. After the pixels have been refocused, the pixel data is combined at block 770 to produce a refocused virtual image at the virtual focal plane selected at block 710. Refocusing approaches involving some or all of blocks 720, 730, 740, and 750 are implemented with more specific functions for various applications.
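The per-pixel accumulation of block 750 can be sketched as follows. The normalization by the total weight is an added assumption for illustration; the text specifies only a (possibly weighted) sum.

```python
def refocus_pixel(ray_value, lens_positions, weights=None):
    # ray_value(u, v): value of the (virtual) ray passing between the
    # selected virtual-film pixel and lens position (u, v) (block 740).
    # lens_positions: the sampled (u, v) positions on the lens aperture.
    # weights: optional per-ray weights for the weighted sum of block 750.
    if weights is None:
        weights = [1.0] * len(lens_positions)
    total = sum(w * ray_value(u, v)
                for (u, v), w in zip(lens_positions, weights))
    # Normalizing by the total weight is an illustrative choice, keeping
    # the pixel value in the same range as the individual ray values.
    return total / sum(weights)
```

The same loop structure supports the aberration-corrected case: ray_value can be backed by a traced conjugate ray rather than an ideal one, without changing the accumulation.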
Sensor-data processing circuitry implemented using one or more of the example embodiments described herein may, depending on the implementation, include one or more microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs), and/or programmable gate arrays (e.g., field-programmable gate arrays (FPGAs)). In this regard, the sensor-data processing circuitry may be any type or form of circuitry now known or later developed. For example, the sensor-data processing circuitry may include a single component, or multiple components (microprocessors, ASICs, and DSPs), whether active and/or passive, coupled together to implement, provide, and/or perform the desired operation, function, or application.
In various applications, the sensor data processing circuitry implements or executes one or more applications, routines, programs and/or data structures that implement particular methods, tasks or operations described and/or illustrated herein. In some applications, the functions of the applications, routines or programs are selectively combined or distributed. In some applications, the applications, routines or programs are implemented by the sensor (or other) data processing circuitry using one or more of a variety of programming languages, known or later developed. Such programming languages include, for example, FORTRAN, C, C++, Java and BASIC, whether compiled or uncompiled, selectively implemented in connection with one or more aspects of the present invention.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Based on the above discussion and illustrations, those skilled in the art will readily recognize that various modifications and changes may be made to the present invention without strictly following the exemplary embodiments and applications illustrated and described herein. For example, such changes may include implementing the various optical imaging applications and devices in different types of applications, increasing or decreasing the number of rays collected per pixel (or other selected image area), or implementing algorithms and/or equations other than those exemplified to collect or process image data. Other changes may involve using coordinate representations other than, or in addition to, Cartesian coordinates, such as polar coordinates. Such modifications and changes do not depart from the true spirit and scope of the present invention.
Claims (12)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US61517904P | 2004-10-01 | 2004-10-01 | |
US60/615,179 | 2004-10-01 | ||
US64749205P | 2005-01-27 | 2005-01-27 | |
US60/647,492 | 2005-01-27 |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB200580039822XA Division CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101426085A CN101426085A (en) | 2009-05-06 |
CN101426085B true CN101426085B (en) | 2012-10-03 |
Family
ID=38965761
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008101691410A Expired - Fee Related CN101426085B (en) | 2004-10-01 | 2005-09-30 | Imaging arrangements and methods therefor |
CNB200580039822XA Expired - Fee Related CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB200580039822XA Expired - Fee Related CN100556076C (en) | 2004-10-01 | 2005-09-30 | Imaging device and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN101426085B (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4941332B2 (en) * | 2008-01-28 | 2012-05-30 | ソニー株式会社 | Imaging device |
JP5472584B2 (en) * | 2008-11-21 | 2014-04-16 | ソニー株式会社 | Imaging device |
JP4706882B2 (en) * | 2009-02-05 | 2011-06-22 | ソニー株式会社 | Imaging device |
JP5463718B2 (en) * | 2009-04-16 | 2014-04-09 | ソニー株式会社 | Imaging device |
EP2244484B1 (en) * | 2009-04-22 | 2012-03-28 | Raytrix GmbH | Digital imaging method for synthesizing an image using data recorded with a plenoptic camera |
JP5515396B2 (en) * | 2009-05-08 | 2014-06-11 | ソニー株式会社 | Imaging device |
DK2442720T3 (en) | 2009-06-17 | 2016-12-19 | 3Shape As | Focus scan devices |
DE102009027372A1 (en) | 2009-07-01 | 2011-01-05 | Robert Bosch Gmbh | Camera for a vehicle |
JP6149339B2 (en) * | 2010-06-16 | 2017-06-21 | 株式会社ニコン | Display device |
JP2012205111A (en) * | 2011-03-25 | 2012-10-22 | Casio Comput Co Ltd | Imaging apparatus |
TW201322048A (en) * | 2011-11-25 | 2013-06-01 | Cheng-Xuan Wang | Field depth change detection system, receiving device, field depth change detecting and linking system |
JP5913934B2 (en) * | 2011-11-30 | 2016-05-11 | キヤノン株式会社 | Image processing apparatus, image processing method and program, and imaging apparatus having image processing apparatus |
JP5871625B2 (en) * | 2012-01-13 | 2016-03-01 | キヤノン株式会社 | IMAGING DEVICE, ITS CONTROL METHOD, AND IMAGING SYSTEM |
US9386297B2 (en) * | 2012-02-24 | 2016-07-05 | Casio Computer Co., Ltd. | Image generating apparatus generating reconstructed image, method, and computer-readable recording medium |
JP2013198016A (en) * | 2012-03-21 | 2013-09-30 | Casio Comput Co Ltd | Imaging apparatus |
JP5459337B2 (en) | 2012-03-21 | 2014-04-02 | カシオ計算機株式会社 | Imaging apparatus, image processing method, and program |
JP5914192B2 (en) | 2012-06-11 | 2016-05-11 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US9398264B2 (en) | 2012-10-19 | 2016-07-19 | Qualcomm Incorporated | Multi-camera system using folded optics |
KR20140094395A (en) * | 2013-01-22 | 2014-07-30 | 삼성전자주식회사 | photographing device for taking a picture by a plurality of microlenses and method thereof |
KR20150006755A (en) * | 2013-07-09 | 2015-01-19 | 삼성전자주식회사 | Image generating apparatus, image generating method and non-transitory recordable medium |
CN103417181B (en) * | 2013-08-01 | 2015-12-09 | 北京航空航天大学 | A kind of endoscopic method for light field video camera |
US10178373B2 (en) | 2013-08-16 | 2019-01-08 | Qualcomm Incorporated | Stereo yaw correction using autofocus feedback |
JP6238657B2 (en) * | 2013-09-12 | 2017-11-29 | キヤノン株式会社 | Image processing apparatus and control method thereof |
WO2015118120A1 (en) | 2014-02-07 | 2015-08-13 | 3Shape A/S | Detecting tooth shade |
JP2015185998A (en) * | 2014-03-24 | 2015-10-22 | 株式会社東芝 | Image processing device and imaging apparatus |
US9613417B2 (en) * | 2015-03-04 | 2017-04-04 | Ricoh Company, Ltd. | Calibration of plenoptic imaging systems using fourier transform |
CN106303208B (en) * | 2015-08-31 | 2019-05-21 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
CN106303209B (en) * | 2015-08-31 | 2019-06-21 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
CN106303210B (en) * | 2015-08-31 | 2019-07-12 | 北京智谷睿拓技术服务有限公司 | Image Acquisition control method and device |
WO2016177914A1 (en) * | 2015-12-09 | 2016-11-10 | Fotonation Limited | Image acquisition system |
EP3182697A1 (en) * | 2015-12-15 | 2017-06-21 | Thomson Licensing | A method and apparatus for correcting vignetting effect caused on an image captured by lightfield cameras |
GB201602836D0 (en) * | 2016-02-18 | 2016-04-06 | Colordyne Ltd | Lighting device with directable beam |
EP3548935B1 (en) * | 2016-12-05 | 2024-08-28 | Photonic Sensors & Algorithms, S.L. | Light field acquisition device |
CN109708193A (en) * | 2018-06-28 | 2019-05-03 | 永康市胜时电机有限公司 | Heating device inlet valve aperture control platform |
CN108868213B (en) * | 2018-08-20 | 2020-05-15 | 浙江大丰文体设施维保有限公司 | Stage disc immediate maintenance analysis mechanism |
WO2020194025A1 (en) * | 2019-03-22 | 2020-10-01 | Universita' Degli Studi Di Bari Aldo Moro | Process and apparatus for the capture of plenoptic images between arbitrary planes |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5282045A (en) * | 1990-04-27 | 1994-01-25 | Hitachi, Ltd. | Depth-of-field control apparatus and image pickup apparatus having the same therein |
US5610390A (en) * | 1994-10-03 | 1997-03-11 | Fuji Photo Optical Co., Ltd. | Solid-state image pickup device having microlenses each with displaced optical axis |
US5757423A (en) * | 1993-10-22 | 1998-05-26 | Canon Kabushiki Kaisha | Image taking apparatus |
CN2394240Y (en) * | 1999-02-01 | 2000-08-30 | 王德胜 | TV image magnifier |
US6320979B1 (en) * | 1998-10-06 | 2001-11-20 | Canon Kabushiki Kaisha | Depth of field enhancement |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5672575A (en) * | 1979-11-19 | 1981-06-16 | Toshiba Corp | Picture input unit |
NO305728B1 (en) * | 1997-11-14 | 1999-07-12 | Reidar E Tangen | Optoelectronic camera and method of image formatting in the same |
2005
- 2005-09-30 CN CN2008101691410A patent/CN101426085B/en not_active Expired - Fee Related
- 2005-09-30 CN CNB200580039822XA patent/CN100556076C/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN100556076C (en) | 2009-10-28 |
CN101065955A (en) | 2007-10-31 |
CN101426085A (en) | 2009-05-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101426085B (en) | Imaging arrangements and methods therefor | |
US9479685B2 (en) | Imaging arrangements and methods therefor | |
US10735635B2 (en) | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps | |
US10375319B2 (en) | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array | |
US20130107085A1 (en) | Correction of Optical Aberrations | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C53 | Correction of patent for invention or patent application | ||
CB02 | Change of applicant information |
Address after: California, USA
Applicant after: The Board of Trustees of the Leland Stanford Junior University
Address before: California, USA
Applicant before: Univ Leland Stanford Junior
COR | Change of bibliographic data |
Free format text: CORRECT: APPLICANT; FROM: UNIV LELAND STANFORD JUNIOR TO: THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20121003 Termination date: 20190930 |