US20110216157A1 - Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems - Google Patents
- Publication number
- US20110216157A1 (U.S. patent application Ser. No. 12/959,137)
- Authority
- US
- United States
- Prior art keywords
- image
- wfov
- classifiers
- original
- view
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/18—Arrangements with more than one light path, e.g. for comparing two specimens
- G02B21/20—Binocular arrangements
- G02B21/22—Stereoscopic arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Definitions
- Face detection methods have become very well established within digital cameras in recent years. This technology brings a range of benefits, including enhanced acquisition of the main image and adaptation of the acquisition process to optimize image appearance and quality based on the detected faces.
- WFOV imaging systems are also used in a range of applications including Google's “street-view” technology and for some video-phone systems where they enable a number of people sitting at a table to be imaged by a single sensor and optical system.
- Mr. Lagarde points out that 180° panoramic images require large screen real-estate. Reduced to a more usual size, Mr. Lagarde presents the examples illustrated at FIGS. 1A-1G. Because panoramic images are difficult to appraise when displayed in a narrow window, doing so has generally been avoided; instead, a 1280×1024 or larger screen and a fast Internet connection are typically recommended.
- FIGS. 1B-1G show the Préfecture building in Grenoble, France
- It is implied here and there that the choice among rectilinear, cylindrical, and equirectangular projections can be made easy, and that different but acceptable panoramic images can result from stitching the same source images using different projection modes.
- FIG. 1A illustrates Piazza Navona, Roma by Gaspar Van Wittel, 1652-1736 (Museo Thyssen-Bornemisza, Madrid).
- Mr. Lagarde indicates that most photographers restrict themselves to subjects which can be photographed with a rectilinear lens (plane projection). A small number of them sometimes use a fisheye lens (spherical projection) or a rotating lens camera (cylindrical projection) or a computer (stitcher programs make use of various projection modes), but when the field of view (horizontal FOV and/or vertical FOV) is higher than 90 degrees (or about, this actually depends on the subject) they are disturbed by the “excessive wide-angle distortion” found in the resulting images.
- FIG. 1B shows a 180° panorama in which cylindrical projection mode is used to show a long building viewed from a short distance. Most people dislike images like this one, in which every straight horizontal line except the horizon is heavily curved.
- FIG. 1C illustrates an attempt to use the rectilinear projection mode: every straight line in the buildings is rendered as a straight line. But while rectilinear projection works well when the field of view is lower than 90 degrees, it should never be used when the field of view exceeds 120 degrees. In this image, although the field of view was restricted to 155 degrees (the original panorama spans 180°), the stretching in the left and right parts is too high and the result is utterly unacceptable.
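The "excessive wide-angle distortion" of rectilinear projection can be quantified: the projection maps an off-axis angle θ to x = f·tan(θ), so the local horizontal stretch grows as 1/cos²θ. A short illustrative sketch (normalized focal length; the sample angles are examples, not values from the patent):

```python
import math

def rectilinear_stretch(theta_deg: float) -> float:
    """Local horizontal stretch of a rectilinear (plane) projection at
    off-axis angle theta, normalized to 1.0 at the image centre. The
    projection maps theta to x = f*tan(theta), whose derivative with
    respect to theta is f / cos^2(theta)."""
    t = math.radians(theta_deg)
    return 1.0 / math.cos(t) ** 2

# Stretch is modest below ~45 degrees off-axis but grows without bound
# toward the edge of a very wide field of view; 77.5 degrees corresponds
# to the edge of the 155-degree FOV mentioned above.
for half_angle in (30, 45, 60, 77.5):
    print(f"{half_angle:5.1f} deg -> x{rectilinear_stretch(half_angle):.1f}")
```

This is why edge regions of a 155° rectilinear panorama appear stretched by more than an order of magnitude relative to the centre.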
- FIG. 1G can be compared with the example of FIG. 1B above: each shows exactly the same buildings and cars, and each comes from exactly the same source images.
- The buildings shown in FIGS. 1B-1G are located on the sides of a large square but, because there are many large trees on this square, standing back far enough for a large field of view is not possible.
- the image shown in FIG. 1B was actually taken at a rather short distance from the main building, while FIG. 1G suggests the viewer is much more distant from this building.
- FIGS. 1A-1G illustrate various conventional attempts to avoid distortion in images with greater than 90° field of view.
- FIG. 2 schematically illustrates a wide field of view (WFOV) system that in one embodiment incorporates a face tracker.
- WFOV wide field of view
- FIG. 3( a ) illustrates a wide horizontal scene mapped onto a full extent of an image sensor.
- FIG. 3( b ) illustrates a wide horizontal scene not mapped onto a full extent of an image sensor, and instead a significant portion of the sensor is not used.
- FIG. 4 illustrates the first four Haar classifiers used in face detection.
- FIGS. 4( a )- 4 ( c ) illustrate magnification of a person speaking among a group of persons within a WFOV image.
- FIGS. 5( a )- 5 ( c ) illustrate varying the magnification of a person speaking among a group of persons within a WFOV image, wherein the degree of magnification may vary depending on the strength or loudness of the speaker's voice.
- An image acquisition device having a wide field of view includes at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the device also includes a control module and an object detection engine that includes one or more cascades of regular object classifiers.
- a WFoV correction engine of the device is configured to correct distortion within the original image.
- the WFoV correction engine processes raw image data of the original WFoV image.
- a rectilinear projection of center pixels of the original WFoV image is applied.
- a cylindrical projection of outer pixels of the original WFoV image is also applied. Modified center and outer pixels are combined to generate a distortion-corrected WFoV image.
- One or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- the applying of the rectilinear projection to center pixels may also include applying a regular rectilinear projection to an inner portion of the center pixels and a squeezed rectilinear projection to an outer portion of the center pixels.
- the applying of the squeezed rectilinear projection to the outer portion of the center pixels may also include applying an increasingly squeezed rectilinear projection in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- the device includes at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°, a control module, and an object detection engine that includes one or more cascades of modified object classifiers.
- the modified object classifiers include a first subset of rectilinear classifiers to be applied to objects appearing in center pixels of the WFoV image, and a second subset of cylindrical classifiers to be applied to objects appearing in outer pixels of the WFoV image.
- One or more objects located within the center or outer pixels, or both, of the original WFoV image are detectable by the object detection engine upon application of the one or more cascades of modified object classifiers, including the first subset of rectilinear classifiers and the second subset of cylindrical classifiers, respectively.
- the first subset of rectilinear classifiers may include a subset of regular rectilinear classifiers with which objects appearing in an inner portion of the center pixels are detectable, and a subset of squeezed rectilinear classifiers with which objects appearing in an outer portion of the center pixels are detectable.
- the subset of squeezed rectilinear classifiers may include subsets of increasingly squeezed rectilinear classifiers with which objects appearing in the outer portion of the center pixels are increasingly detectable in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- the device may also include a WFoV correction engine configured to correct distortion within the original image.
- the WFoV correction engine may process raw image data of the original WFoV image.
- a rectilinear mapping of center pixels of the original WFoV image may be applied.
- a cylindrical mapping of outer pixels of the original WFoV image may also be applied. Modified center and outer pixels may be combined to generate a distortion-corrected WFoV image.
- the method includes acquiring the original WFoV image. Distortion is corrected within the original WFoV image by processing raw image data of the original WFoV image.
- a rectilinear projection is applied to center pixels of the original WFoV image and a cylindrical projection is applied to outer pixels of the original WFoV image. Modified center and outer pixels are combined to generate a distortion-corrected WFoV image.
- One or more cascades of regular object classifiers are applied to detect one or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image upon application of the one or more cascades of regular object classifiers.
- the applying a rectilinear projection to center pixels may include applying a regular rectilinear projection to an inner portion of the center pixels and a squeezed rectilinear projection to an outer portion of the center pixels.
- the applying of a squeezed rectilinear projection to the outer portion of the center pixels may include applying an increasingly squeezed rectilinear projection in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
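A minimal numerical sketch of this piecewise projection follows: regular rectilinear in the centre, cylindrical at the edges, and an increasingly squeezed rectilinear blend in between. The boundary angles and the linear blend weight are illustrative assumptions; the document does not specify exact values.

```python
import math

INNER = math.radians(30)   # pure rectilinear inside this half-angle (assumed)
OUTER = math.radians(50)   # pure cylindrical beyond this half-angle (assumed)
F = 1.0                    # focal length, normalized units

def hybrid_x(theta: float) -> float:
    """Horizontal image coordinate for a ray at angle theta (radians):
    rectilinear (f*tan) in the centre, cylindrical (f*theta, i.e. arc
    length) at the edges, and an increasingly squeezed blend between.
    The mapping is continuous at both boundaries."""
    a = abs(theta)
    if a <= INNER:
        x = F * math.tan(a)
    elif a >= OUTER:
        x = F * a
    else:
        w = (a - INNER) / (OUTER - INNER)   # 0 -> 1 across the blend zone
        x = (1.0 - w) * F * math.tan(a) + w * F * a
    return math.copysign(x, theta)
```

Because tan(θ) > θ, the blend progressively "squeezes" the rectilinear mapping toward the shorter cylindrical arc length as θ approaches the outer boundary, which is the behaviour the squeezed-rectilinear region describes.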
- a further method for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the method includes acquiring the original WFoV image.
- One or more cascades of modified object classifiers are applied.
- a first subset of rectilinear classifiers is applied to objects appearing in center pixels of the WFoV image, and a second subset of cylindrical classifiers is applied to objects appearing in outer pixels of the WFoV image.
- One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detected by the applying of the modified object classifiers, including the applying of the first subset of rectilinear classifiers and the applying of the second subset of cylindrical classifiers, respectively.
- the applying of the first subset of rectilinear classifiers may include applying a subset of regular rectilinear classifiers with which objects appearing in an inner portion of the center pixels are detectable, and/or applying a subset of squeezed rectilinear classifiers with which objects appearing in an outer portion of the center pixels are detectable.
- the applying of the subset of squeezed rectilinear classifiers may include applying subsets of increasingly squeezed rectilinear classifiers with which objects appearing in the outer portion of the center pixels are increasingly detectable in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- the method may include correcting distortion within the original image by processing raw image data of the original WFoV image including applying a rectilinear mapping of center pixels of the original WFoV image and a cylindrical mapping of outer pixels of the original WFoV image, and combining modified center and outer pixels to generate a distortion-corrected WFoV image.
- processor-readable media having embedded therein code for programming a processor to perform any of the methods described herein.
- the device includes at least one non-linear lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the non-linear lens is configured to project a center region of a scene onto the middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region.
- the device also includes an object detection engine including one or more cascades of regular object classifiers.
- a WFoV correction engine of the device is configured to correct distortion within the original WFoV image.
- the WFoV correction engine processes raw image data of the original WFoV image.
- a cylindrical projection of outer pixels of the original WFoV image is applied.
- Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image.
- One or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- the device includes at least one non-linear lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the non-linear lens is configured to project a center region of a scene onto the middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region.
- An object detection engine includes one or more cascades of modified object classifiers including a subset of cylindrical classifiers to be applied to objects appearing in outer pixels of the WFoV image.
- One or more objects located within the center or outer pixels, or both, of the original WFoV image are detectable by the object detection engine upon application of the one or more cascades of modified object classifiers, including a subset of regular classifiers and the subset of cylindrical classifiers, respectively.
- the device may include a WFoV correction engine configured to correct distortion within the original image.
- the WFoV correction engine processes raw image data of the original WFoV image. A cylindrical mapping of outer pixels of the original WFoV image is performed. Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image.
- Another method for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the method includes acquiring the original WFoV image, including utilizing at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. Distortion is corrected within the original WFoV image by processing raw image data of the original WFoV image. A cylindrical projection of outer pixels of the original WFoV image is applied. Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image.
- One or more objects are detected by applying one or more cascades of regular object classifiers to one or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image.
- a further method for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the method includes acquiring the original WFoV image, including utilizing at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region.
- One or more modified object classifiers are applied.
- a subset of cylindrical classifiers is applied to objects appearing in outer pixels of the WFoV image, and a subset of regular classifiers is applied to objects appearing in center pixels of the WFoV image.
- One or more objects located within center or outer pixels, or both, of the original WFoV image are detected by the applying of the one or more cascades of modified object classifiers, including the applying of the subset of regular classifiers and the applying of the subset of cylindrical classifiers, respectively.
- the method may include correcting distortion within the original WFoV image by processing raw image data of the original WFoV image, including applying a cylindrical mapping of outer pixels of the original WFoV image, and combining center pixels and modified outer pixels to generate a distortion-corrected WFoV image.
- One or more processor-readable media having embedded therein code is/are provided for programming a processor to perform any of the methods described herein of processing wide field of view images acquired with an image acquisition device having an image sensor and at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region to acquire an original wide field of view (WFoV) image with a field of view of more than 90°.
- the device includes a lens assembly and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the lens assembly includes a compressed rectilinear lens to capture a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region.
- the device also includes a cylindrical lens on one or both sides of the compressed rectilinear lens to capture outer regions of the scene onto outer portions of the image sensor such as to directly provide a cylindrical mapping of the outer regions.
- An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- the device includes a lens assembly and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the lens assembly includes a lens having a compressed rectilinear center portion to capture a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region.
- the lens also includes cylindrical outer portions on either side of the compressed rectilinear portion to capture outer regions of the scene onto outer portions of the image sensor such as to directly provide a cylindrical mapping of the outer regions.
- An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- the device includes multiple cameras configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the original wide field of view image includes a combination of multiple images captured each with one of the multiple cameras.
- the multiple cameras include a first camera having a first image sensor and a compressed rectilinear lens to capture a center region of a scene onto the first sensor such as to directly provide a rectilinear mapping of the center region, and a second camera having a second image sensor and a first cylindrical lens on a first side of the compressed rectilinear lens to capture a first outer region of the scene onto the second image sensor such as to directly provide a cylindrical mapping of the first outer region, and a third camera having a third image sensor and a second cylindrical lens on a second side of the compressed rectilinear lens to capture a second outer region of the scene onto the third image sensor such as to directly provide a cylindrical mapping of the second outer region.
- An object detection engine of the device includes one or more cascades of regular object classifiers.
- One or more objects located within the original wide field of view image appearing on the multiple cameras of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- the device includes multiple cameras configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°.
- the original wide field of view image includes a combination of multiple images captured each with one of the multiple cameras.
- the multiple cameras each utilize a same lens and include a first camera having a first image sensor utilizing a compressed rectilinear portion of the lens to capture a center region of a scene onto the first sensor such as to directly provide a rectilinear mapping of the center region, and a second camera having a second image sensor utilizing a first cylindrical portion of the lens on a first side of the compressed rectilinear portion to capture a first outer region of the scene onto the second image sensor such as to directly provide a cylindrical mapping of the first outer region, and a third camera having a third image sensor utilizing a second cylindrical portion of the lens on a second side of the compressed rectilinear portion to capture a second outer region of the scene onto the third image sensor such as to directly provide a cylindrical mapping of the second outer region.
- An object detection engine of the device includes one or more cascades of regular object classifiers.
- One or more objects located within the original wide field of view image appearing on the multiple cameras of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- any of the devices described herein may include a full frame buffer coupled with the image sensor for acquiring raw image data, a mixer, and a zoom and pan engine, and/or an object tracking engine, just as any of the methods described herein may include tracking one or more detected objects over multiple sequential frames.
- Any of the object classifiers described herein may include face classifiers or classifiers of other specific objects.
- Any of the regular object classifiers described herein may include rectangular object classifiers.
- Exemplary face region images distorted in a manner like the building frontages of FIGS. 1B-1G might show rectilinear distortion similar to FIG. 1C at the edges, and cylindrical distortion as in FIG. 1B.
- the system shown in FIG. 2 includes a wide field of view (WFOV) lens of, for example, 120 degrees; a sensor, for example of 3 megapixels or more; a full frame buffer (e.g., for raw Bayer data); a WFOV correction module; a face detector and face tracker; a zoom and pan engine; a mixer; and a control module.
- the WFOV system illustrated at FIG. 2 incorporates a lens assembly and a corresponding image sensor, which is typically more elongated than a conventional image sensor.
- the system further incorporates a face tracking module which employs one or more cascades of rectangular face classifiers.
- face classifiers may be altered according to the location of the face regions within an unprocessed (raw) image of the scene.
- the center region of the image representing up to 100° of the horizontal field of view (FOV) is mapped using a squeezed rectilinear projection.
- this may be obtained using a suitable non-linear lens design to directly project the center region of the scene onto the middle 2/3 of the image sensor.
- the remaining approximately 1/3 portion of the image sensor (i.e., 1/6 at each end of the sensor) is mapped using a cylindrical projection; the edges of the wide-angle lens are designed to optically effect this projection directly onto the imaging sensor.
- the entire horizontal scene is mapped onto the full extent of the image sensor, as illustrated at FIG. 3( a ).
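The sensor budget described above (middle 2/3 rectilinear, 1/6 at each end cylindrical) can be expressed as a simple column-to-region lookup; the region names below are illustrative, not terms from the document:

```python
def sensor_region(col: int, width: int) -> str:
    """Which projection a given sensor column falls under, per the
    split described above: the middle 2/3 of columns carry the
    rectilinearly mapped centre of the scene, and the 1/6 of columns
    at each end carry a cylindrically mapped edge region."""
    edge = width // 6
    if col < edge:
        return "cylindrical-left"
    if col >= width - edge:
        return "cylindrical-right"
    return "rectilinear-center"

# For a 3000-column sensor: columns 0-499 and 2500-2999 are cylindrical.
print(sensor_region(0, 3000), sensor_region(1500, 3000), sensor_region(2999, 3000))
```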
- some of the scene mappings are achieved optically, but some additional image post-processing is used to refine the initial projections of the image scene onto the sensor.
- the lens design can be optimized for manufacturing considerations, a larger portion of the sensor area can be used to capture useful scene data and the software post-processing overhead is similar to the pure software embodiment.
- multiple cameras are configured to cover overlapping portions of the desired field of view and the acquired images are combined into a single WFOV image in memory.
- this plurality of cameras is configured to have the same optical center, thus mitigating perspective-related problems for foreground objects.
- techniques employed in panorama imaging may be used advantageously to join images at their boundaries, or to determine the optimal join line where a significant region of image overlap is available.
- See U.S. patent application Ser. Nos. 12/636,608, 12/636,618, 12/636,629, 12/636,639, and 12/636,647, as well as U.S. published applications nos. US20060182437, US20090022422, US20090021576 and US20060268130.
- in a multi-camera WFOV device, three or more standard cameras, each with a 60-degree FOV, are combined to provide an overall horizontal WFOV of 120-150 degrees with an overlap of 15-30 degrees between cameras.
- the field of view of such a camera array can be extended horizontally by adding more cameras; it may be extended vertically by adding an identical array of three or more horizontally aligned cameras facing in a higher (or lower) vertical direction, with a similar vertical overlap of 15-30 degrees, offering a vertical FOV of 90-105 degrees for two such WFOV arrays.
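The combined coverage follows simple arithmetic: n cameras of per-camera FOV f with pairwise overlap o cover n·f − (n−1)·o degrees. A quick check against the figures above:

```python
def combined_fov(n_cameras, fov_per_camera, overlap):
    """Total field of view (degrees) covered by n adjacent cameras,
    each with the given FOV, where adjacent cameras share the given
    overlap: n*f - (n-1)*o."""
    return n_cameras * fov_per_camera - (n_cameras - 1) * overlap

# Three 60-degree cameras with 15-30 degrees of overlap:
print(combined_fov(3, 60, 15))   # 150
print(combined_fov(3, 60, 30))   # 120
# Two vertically stacked 60-degree rows with 15-degree overlap:
print(combined_fov(2, 60, 15))   # 105
```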
- the vertical FOV may be increased by adding further horizontally aligned camera arrays.
- WLC wafer-level cameras
- a central WFOV camera has its range extended by two side cameras.
- the central WFOV camera can employ an optical lens optimized to provide a 120-degree compressed rectilinear mapping of the central scene.
- the side cameras can be optimized to provide a cylindrical mapping of the peripheral regions of the scene, thus providing a similar result to that obtained in FIG. 3( a ), but using three independent cameras with independent optical systems rather than a single sensor/ISP as shown in FIG. 3( b ).
- techniques employed in panorama imaging to join overlapping images can be advantageously used (see the Panorama cases referred to above herein).
- FIG. 3( a ) illustrates one embodiment where this can be achieved using a compressed rectilinear lens in the middle, with a cylindrical lens on either side.
- all three lenses could be combined into a single lens structure designed to minimize distortions where the rectilinear projection of the original scene overlaps with the cylindrical projection.
- a standard face-tracker can now be applied to the WFOV image as all face regions should be rendered in a relatively undistorted geometry.
- the entire scene need not be re-mapped; instead, only the luminance components are re-mapped and used to generate a geometrically undistorted integral image. Face classifiers are then applied to this integral image in order to detect faces. Once faces are detected, those faces and their surrounding peripheral regions can be re-mapped on each frame, while it may be sufficient to re-map the entire scene background, which is assumed to be static, only occasionally, say every 60-120 image frames. In this way, image processing and enhancement can be focused on the people in the image scene.
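The integral image referred to above can be computed from the remapped luminance plane with two cumulative sums, after which the sum over any rectangle (as used by rectangular face classifiers) costs four lookups regardless of the rectangle's size. A sketch using NumPy:

```python
import numpy as np

def integral_image(luma: np.ndarray) -> np.ndarray:
    """Summed-area table of a single-channel (luminance) plane: entry
    (y, x) holds the sum of all pixels above and to the left of it,
    inclusive."""
    return luma.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Sum of the h x w rectangle with top-left pixel (top, left),
    computed from four lookups into the integral image."""
    b, r = top + h - 1, left + w - 1
    total = int(ii[b, r])
    if top > 0:
        total -= int(ii[top - 1, r])
    if left > 0:
        total -= int(ii[b, left - 1])
    if top > 0 and left > 0:
        total += int(ii[top - 1, left - 1])
    return total
```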
- the remapping of the image scene, or portions thereof, involves the removal of purple fringes (due to blue shift) or the correction of chromatic aberrations (see US20090189997).
- a single mapping of the input image scene is used. If, for example, only a simple rectilinear mapping were applied across the entire image scene, the edges of the image would be distorted as in FIG. 1C and a conventional face tracker could be used only across the middle 40% or so of the image. Accordingly the rectangular classifiers of the face tracker are modified to take account of the scene mappings across the other 60% of image scene regions: over the middle portion of the image they can be applied unaltered; over the second 30% they are selectively expanded or compressed in the horizontal direction to account for the degree of squeezing of the scene during the rectilinear mapping process. Finally, in the outer third the face classifiers are adapted to account for the cylindrical mapping used in this region of the image scene.
- Having greater granularity for the classifiers is advantageous particularly when starting to rescale features inside the classifier individually, based on the distance to the optical center.
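As an illustration of such zone-dependent classifier adaptation, the sketch below returns a horizontal scale factor for a classifier window as a function of normalized distance from the optical center. The zone boundaries (middle 40%, next 30%, outer third), the linear compression profile, and the 120° field-of-view assumption behind the cosine term are illustrative assumptions, not values specified in this description:

```python
import math

def horizontal_scale(x_norm):
    """Illustrative horizontal scale factor for a classifier window,
    given x_norm in [0, 1]: normalized distance from the optical
    center to the image edge.  All zone boundaries and profiles
    below are assumptions for the sake of the sketch."""
    if x_norm < 0.4:
        # middle portion: classifiers applied unaltered
        return 1.0
    elif x_norm < 0.7:
        # next zone: linearly increasing horizontal compression to
        # match the squeezing of the compressed rectilinear mapping
        t = (x_norm - 0.4) / 0.3
        return 1.0 - 0.3 * t
    else:
        # outer third: compensate for the cylindrical mapping
        # (assuming a 120 degree total FOV, i.e. 60 degree half-angle)
        theta = x_norm * math.pi / 3
        return math.cos(theta)
```

Finer granularity, as noted above, would apply such a factor per feature inside the classifier rather than per window.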
- an initial, shortened chain of modified classifiers is applied to the raw image (i.e. without any rectilinear or cylindrical re-mapping).
- This chain is composed of some of the initial face classifiers from a normal face detection chain.
- These initial classifiers are also, typically, the most aggressive at eliminating non-faces from consideration. They also tend to be simpler in form; the first four Haar classifiers from the Viola-Jones cascade are illustrated in FIG. 4 (these may be implemented through a 22×22 pixel window in another embodiment).
- This short classifier chain is employed to obtain a set of potential face regions which may then be re-mapped (using, for example, compressed rectilinear mapping and/or cylindrical mapping) to enable the remainder of a complete face detection classifier chain to be applied to each potential face region.
- This embodiment relies on the fact that 99.99% of non-face regions are eliminated by applying the first few face classifiers; thus a small number of potential face regions would be re-mapped rather than the entire image scene before applying a full face detection process.
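The two-stage approach above can be sketched as follows; `short_chain`, `remap`, and `full_chain` are hypothetical stand-ins for the initial modified classifiers run on the raw frame, the per-region rectilinear/cylindrical re-mapping, and the remainder of the cascade, respectively:

```python
import numpy as np

def detect_faces(image, windows, short_chain, remap, full_chain):
    """Two-stage detection sketch: a short, aggressive classifier
    chain prunes the vast majority of windows on the raw (distorted)
    image; only the surviving candidates are re-mapped before the
    full cascade is applied."""
    candidates = [w for w in windows if short_chain(image, w)]
    faces = []
    for w in candidates:
        patch = remap(image, w)   # re-map only this small region
        if full_chain(patch):
            faces.append(w)
    return faces
```

The payoff is that the expensive geometric re-mapping runs on a handful of candidate patches instead of the whole frame.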
- distortion may be compensated by a method that involves applying geometrical adjustments (as a function of distance to the optical center) when the integral image is computed (in cases where the template matching is done using the integral image), or by compensating for the distortion when computing the sub-sampled image used for face detection and face tracking (in cases where template matching is done directly on Y data).
- face classifiers can be divided into symmetric and non-symmetric classifiers.
- split classifier chains: for example, right- and left-hand face detector cascades may report detection of a half-face region. This may indicate that a full face is present but the second half is more or less distorted than would be expected, perhaps because it is closer to or farther from the lens than is normal. In such cases a more relaxed half- or full-face detector may be employed to confirm whether a full face is actually present, or a lower acceptance threshold may be set for the current detector.
- when a face is tracked across the scene, it may be desired to draw particular attention to that face and to emphasize it against the main scene.
- in applications suitable for videotelephony, there may be one or more faces in the main scene, but one (or more) of these is speaking. It is possible, using a stereo microphone, to localize the speaking face.
- These face regions, and the other foreground regions, are further processed to magnify them (e.g., in one embodiment by a factor of ×1.8) against the background; in a simple embodiment this magnified face is simply composited onto the background image in the same location as the unmagnified original.
- the other faces and the main background of the image are de-magnified and/or squeezed in order to keep the overall image size self-consistent. This may lead to some image distortion, particularly surrounding the "magnified" face, but this helps to emphasize the person speaking, as illustrated in FIGS. 4(a)-4(c). In this case the degree of magnification is generally kept below about ×1.5 to avoid excessive distortion across the remainder of the image.
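One way to sketch the magnify-and-squeeze compositing for a horizontal band of columns, keeping the overall image width constant; the nearest-neighbor resampling and the helper name `emphasize_columns` are assumptions for the sake of illustration:

```python
import numpy as np

def emphasize_columns(image, c0, c1, zoom=1.8):
    """Resample image columns so the band [c0, c1) is magnified by
    `zoom` while the remaining columns are squeezed, keeping the
    total width unchanged (so the composited frame stays the same
    size, at the cost of distortion around the magnified band)."""
    h, w = image.shape[:2]
    mag_w = int(round((c1 - c0) * zoom))
    rest_w = w - mag_w                 # width left for the two sides
    side = c0 + (w - c1)               # original width of the sides

    def resample(cols, new_w):
        # nearest-neighbor horizontal resampling
        idx = np.linspace(0, cols.shape[1] - 1, new_w).round().astype(int)
        return cols[:, idx]

    left_w = int(round(c0 / side * rest_w))
    right_w = rest_w - left_w
    return np.concatenate([
        resample(image[:, :c0], left_w),
        resample(image[:, c0:c1], mag_w),
        resample(image[:, c1:], right_w),
    ], axis=1)
```

Varying `zoom` with the speaker's loudness, as described below, only requires recomputing the three band widths per frame.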
- the degree of magnification can be varied according to the strength or loudness of a speaker's voice, as illustrated at FIGS. 5(a)-5(c).
- the rendering of the face region and surrounding portions of the image can be adjusted to emphasize one or more persons appearing in the final, re-mapped image of the captured scene.
- a stereo microphone system triangulates the location of the person speaking and a portion of the scene is zoomed by a factor greater than one. The remaining portions of the image are zoomed by a factor less than one, so that the overall image is of approximately the same dimension. Thus persons appearing in the image appear larger when they are talking and it is easier for viewers to focus on the current speaker from a group.
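The stereo localization step can be sketched via a time-difference-of-arrival estimate between the two microphone channels. Converting the delay to a bearing (e.g. via arcsin(delay·c/(fs·d)) for microphone spacing d) is left out, and the function below is an illustrative assumption rather than the method specified here:

```python
import numpy as np

def estimate_delay(left, right):
    """Delay (in samples) of the right channel relative to the left,
    estimated from the peak of the full cross-correlation; a positive
    value means the sound reached the left microphone first."""
    corr = np.correlate(right, left, mode="full")
    return int(np.argmax(corr)) - (len(left) - 1)
```

The sign and magnitude of the delay indicate which side of the scene the speaker is on, which is enough to pick the face region to magnify.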
Abstract
Description
- This application claims the benefit of priority under 35 USC §119 to U.S. provisional patent application No. 61/311,264, filed Mar. 5, 2010. This application is one of a series of contemporaneously-filed patent applications including United States patent application (Atty. Docket FN-353A-US, FN-353B-US, and FN-353C-US), each of which are incorporated by reference.
- Face detection methods have become very well established within digital cameras in recent years. This technology brings a range of benefits, including enhanced acquisition of the main image and adaptation of the acquisition process to optimize image appearance and quality based on the detected faces.
- More recently, newer consumer cameras have begun to feature wide field of view (WFOV) imaging systems and as the benefits of obtaining a wider scene become apparent to consumers, it is expected that further growth will ensue in such imaging systems along with an ability to achieve even wider fields of view over time. In professional cameras, such WFOV imaging systems are better known, the most well known being the fish-eye lens. WFOV imaging systems are also used in a range of applications including Google's “street-view” technology and for some video-phone systems where they enable a number of people sitting at a table to be imaged by a single sensor and optical system.
- Now mapping a WFOV image onto a rectilinear image sensor is non-trivial and a wide range of different techniques are available depending on the exact form of the WFOV lens and associated optical elements. The desired image perspective is also important.
- Unfortunately due to the complexity of WFOV imaging systems the benefits of face detection technologies have not been successfully applied to such systems. In particular, faces near the center of a WFOV camera appear closer to the camera and experience some geometrical distortions. Faces about mid-way from the center appear at approximately the correct distances from the camera and experience less significant distortions. Faces towards the edge experience very significant geometrical distortions. The exact nature of each of these types of perspective and geometrical distortion depend on the nature of the lens and optical system.
- Clearly a conventional face detection or face tracking system employing rectangular classifiers or integral image techniques cannot be conveniently applied directly to such faces. Accordingly methods are desired to adapt and compensate for image distortions within such WFOV imaging systems so that face detection technologies can be successfully employed in devices like digital cameras and video phone systems.
- The following is from http://www.panorama-numerique.com/squeeze/squeeze.htm, where it is referred to as "Correcting wider than 90° rectilinear images to print or to display architecture panoramas," by Georges Lagarde. The indicated point is to remove stretching near the sides of a wide-angle shot. Mr. Lagarde indicates that one simply has to "just squeeze your panos!" However, in practice, there are greater complexities than that. This application provides several embodiments after this introduction for displaying panoramas without all the inherent distortion.
- Mr. Lagarde points out that 180° panoramic images require large screen real-estate. Reduced to a more usual size, Mr. Lagarde presents the examples illustrated at FIGS. 1A-1G. While panoramic images are typically difficult to appraise, displaying them in a narrow window has generally been avoided, and instead a 1280×1024 screen or "larger" and a fast Internet connection may be typically recommended.
- Mr. Lagarde points out that the exact same source images of FIGS. 1A-1G (showing the Préfecture building in Grenoble, France) were used in a previous tutorial, Rectilinear/cylindric/equirectangular selection made easy, and that it is implied here and there that different but acceptable panoramic images can result from stitching the same source images and then using different projection modes.
- FIG. 1A illustrates Piazza Navona, Roma by Gaspar Van Wittel, 1652-1736 (Museo Thyssen-Bornemisza, Madrid).
- Mr. Lagarde indicates that most photographers restrict themselves to subjects which can be photographed with a rectilinear lens (plane projection). A small number of them sometimes use a fisheye lens (spherical projection) or a rotating lens camera (cylindrical projection) or a computer (stitcher programs make use of various projection modes), but when the field of view (horizontal FOV and/or vertical FOV) is higher than 90 degrees (or thereabouts, depending on the subject) they are disturbed by the "excessive wide-angle distortion" found in the resulting images.
- Adapting the usual projection modes to the subject and/or using multiple local projections to avoid this distortion is a violation of the classical perspective rules, but escaping classical perspective rules is exactly what sketchers and painters always did to avoid unpleasant images. Mr. Lagarde points out that this was explained by Anton Maria Zanetti and Antonio Conti using the words of their times ("Il Professore m'entendara") when they described how the camera ottica was used by the seventeenth-century Venetian masters. Because the field of view of the lenses available then was much lower than 90°, it is evident that a camera oscura was not able to display the very wide vedute they sketched and painted: the solution was to record several images and to stitch them onto the canvas to get a single view. (Strangely enough, the fact that the field of view is limited to about 90 degrees when one uses classical perspective, i.e. rectilinear projection on a vertical plane, is not handled in most perspective treatises.)
- Equivalent “tricks” can be used for photographic images:
- Use of several projection planes—their number and location depending of the subject—for a single resulting image. This is the method explained by L. Zelnik-Manor in Squaring the Circle in Panoramas (see references.)
- Use of several projection modes—the selected modes depending of the subject—for a single resulting image. This is the method proposed by Buho (Eric S.) and used by Johnh (John Houghton) in Hybrid Rectilinear & Cylindrical projections (see references.)
- Use of an "altered rectilinear" projection (thus no longer rectilinear) where the modification is a varying horizontal compression, null in the center and high near the sides. This is the method proposed by Olivier_G (Olivier Gallen) in Panoramas: la perspective classique ne s'applique plus! (see references.)
- Use of a "squeezed rectilinear" projection (not an actual rectilinear one either) where the modification is a varying horizontal and vertical compression, null near the horizon (shown as a red line in the examples), null near a vertical line which goes through the main vanishing point (shown as a blue line in the examples), and increasing like tan(angle) toward the sides (where angle corresponds to the angular distance between the point and the line.)
- If photographers like the results, no doubt they will use that.
- In a first example, referring now to FIG. 1B, an image is shown that is a 180° panorama where cylindrical projection mode is used to show a long building viewed from a short distance. Most people dislike images like this one, where except for the horizon, every straight horizontal line is heavily curved.
- The next image, shown in FIG. 1C, illustrates an attempt to use the rectilinear projection mode: every straight line in the buildings is rendered as a straight line. But, while rectilinear projection works well when the field of view is lower than 90 degrees, it should never be used when the field of view is larger than 120 degrees. In this image, though the field of view was restricted to 155 degrees (the original panorama corresponds to 180°), the stretching is too high in the left and right parts and the result utterly unacceptable.
- Referring to FIG. 1D, because digital images can be squeezed at will, rather than discarding this previous rectilinear image, one can correct the excessive stretching. The result is no longer rectilinear (diagonal lines are somewhat distorted) but a much wider part of the buildings now has an acceptable look. The variable amount of squeezing used is shown by the dotted line near the top side: the closer the dots are, the more compressed the corresponding part of the rectilinear original.
- Referring to FIG. 1E, the rendering of the main building is much better. Note that this view looks as if it were taken from a more distant point of view than in the cylindrical image: this is not true, as the same source images were used for both panoramas.
- Referring to FIG. 1F, the leftmost and rightmost parts of the squeezed image are improved, but they are still not very pleasant. A possible solution is to use the edge parts of the cylindrical version in a second layer.
- And finally, referring to FIG. 1G: this view can be compared with the example of FIG. 1B at the top of this page: each one shows exactly the same buildings and cars, and each comes from exactly the same source images.
- The pictured buildings in FIGS. 1B-1G are located on the sides of a large square but, because there are many large trees on this square, standing back enough for a large field of view is not possible. The image shown in FIG. 1B illustrates photos that were actually taken at a rather short distance from the main building, while FIG. 1G suggests the viewer is much more distant from this building.
- FIGS. 1A-1G illustrate various conventional attempts to avoid distortion in images with greater than 90° field of view.
- FIG. 2 schematically illustrates a wide field of view (WFOV) system that in one embodiment incorporates a face tracker.
- FIG. 3(a) illustrates a wide horizontal scene mapped onto a full extent of an image sensor.
- FIG. 3(b) illustrates a wide horizontal scene not mapped onto a full extent of an image sensor, such that a significant portion of the sensor is not used.
- FIG. 4 illustrates the first four Haar classifiers used in face detection.
- FIGS. 4(a)-4(c) illustrate magnification of a person speaking among a group of persons within a WFOV image.
- FIGS. 5(a)-5(c) illustrate varying the magnification of a person speaking among a group of persons within a WFOV image, wherein the degree of magnification may vary depending on the strength or loudness of the speaker's voice.
- An image acquisition device having a wide field of view is provided. The device includes at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The device also includes a control module and an object detection engine that includes one or more cascades of regular object classifiers. A WFoV correction engine of the device is configured to correct distortion within the original image. The WFoV correction engine processes raw image data of the original WFoV image. A rectilinear projection of center pixels of the original WFoV image is applied. A cylindrical projection of outer pixels of the original WFoV image is also applied. Modified center and outer pixels are combined to generate a distortion-corrected WFoV image. One or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- The applying of the rectilinear projection to center pixels may also include applying a regular rectilinear projection to an inner portion of the center pixels and a squeezed rectilinear projection to an outer portion of the center pixels. The applying of the squeezed rectilinear projection to the outer portion of the center pixels may also include applying an increasingly squeezed rectilinear projection in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- Another image acquisition device having a wide field of view is provided. The device includes at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°, a control module, and an object detection engine that includes one or more cascades of modified object classifiers. The modified object classifiers include a first subset of rectilinear classifiers to be applied to objects appearing in center pixels of the WFoV image, and a second subset of cylindrical classifiers to be applied to objects appearing in outer pixels of the WFoV image. One or more objects located within the center or outer pixels, or both, of the original WFoV image are detectable by the object detection engine upon application of the one or more cascades of modified object classifiers, including the first subset of rectilinear classifiers and the second subset of cylindrical classifiers, respectively.
- The first subset of rectilinear classifiers may include a subset of regular rectilinear classifiers with which objects appearing in an inner portion of the center pixels are detectable, and a subset of squeezed rectilinear classifiers with which objects appearing in an outer portion of the center pixels are detectable. The subset of squeezed rectilinear classifiers may include subsets of increasingly squeezed rectilinear classifiers with which objects appearing in the outer portion of the center pixels are increasingly detectable in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- The device may also include a WFoV correction engine configured to correct distortion within the original image. The WFoV correction engine may process raw image data of the original WFoV image. A rectilinear mapping of center pixels of the original WFoV image may be applied. A cylindrical mapping of outer pixels of the original WFoV image may also be applied. Modified center and outer pixels may be combined to generate a distortion-corrected WFoV image.
- A method is provided for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The method includes acquiring the original WFoV image. Distortion is corrected within the original WFoV image by processing raw image data of the original WFoV image. A rectilinear projection is applied to center pixels of the original WFoV image and a cylindrical projection is applied to outer pixels of the original WFoV image. Modified center and outer pixels are combined to generate a distortion-corrected WFoV image. One or more cascades of regular object classifiers are applied to detect one or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image upon application of the one or more cascades of regular object classifiers.
- The applying a rectilinear projection to center pixels may include applying a regular rectilinear projection to an inner portion of the center pixels and a squeezed rectilinear projection to an outer portion of the center pixels. The applying of a squeezed rectilinear projection to the outer portion of the center pixels may include applying an increasingly squeezed rectilinear projection in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
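As a worked sketch of stitching a rectilinear center projection to cylindrical outer projections, the function below maps a horizontal field angle to a sensor coordinate. The 30° boundary, the unit focal length, and the offset used to keep the two pieces continuous at the boundary are assumptions, since the description does not give formulas:

```python
import math

def sensor_x(theta, f=1.0, theta0=math.radians(30)):
    """Horizontal sensor coordinate for a ray at field angle theta
    (radians).  The center of the field uses a rectilinear projection
    f*tan(theta); beyond +/-theta0 a cylindrical projection f*theta is
    used, shifted so the two pieces meet continuously at the boundary
    (where the cylindrical term grows linearly instead of like tan,
    which is what limits stretching toward the edges)."""
    s = math.copysign(1.0, theta)
    a = abs(theta)
    if a <= theta0:
        return s * f * math.tan(a)
    return s * (f * math.tan(theta0) + f * (a - theta0))
```

A squeezed-rectilinear transition zone, as described above, could be added by blending the two branches over a band around theta0 rather than switching abruptly.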
- A further method is provided for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The method includes acquiring the original WFoV image. One or more cascades of modified object classifiers are applied. A first subset of rectilinear classifiers is applied to objects appearing in center pixels of the WFoV image, and a second subset of cylindrical classifiers is applied to objects appearing in outer pixels of the WFoV image. One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detected by the applying of the modified object classifiers, including the applying of the first subset of rectilinear classifiers and the applying of the second subset of cylindrical classifiers, respectively.
- The applying of the first subset of rectilinear classifiers may include applying a subset of regular rectilinear classifiers with which objects appearing in an inner portion of the center pixels are detectable, and/or applying a subset of squeezed rectilinear classifiers with which objects appearing in an outer portion of the center pixels are detectable. The applying of the subset of squeezed rectilinear classifiers may include applying subsets of increasingly squeezed rectilinear classifiers with which objects appearing in the outer portion of the center pixels are increasingly detectable in a direction from a first boundary with the inner portion of the center pixels to a second boundary with the outer pixels.
- The method may include correcting distortion within the original image by processing raw image data of the original WFoV image including applying a rectilinear mapping of center pixels of the original WFoV image and a cylindrical mapping of outer pixels of the original WFoV image, and combining modified center and outer pixels to generate a distortion-corrected WFoV image.
- One or more processor-readable media having embedded therein code for programming a processor to perform any of the methods described herein.
- Another image acquisition device having a wide field of view is provided. The device includes at least one non-linear lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The non-linear lens is configured to project a center region of a scene onto the middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. The device also includes an object detection engine including one or more cascades of regular object classifiers. A WFoV correction engine of the device is configured to correct distortion within the original WFoV image. The WFoV correction engine processes raw image data of the original WFoV image. A cylindrical projection of outer pixels of the original WFoV image is applied. Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image. One or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- Another image acquisition device having a wide field of view is provided. The device includes at least one non-linear lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The non-linear lens is configured to project a center region of a scene onto the middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. An object detection engine includes one or more cascades of modified object classifiers including a subset of cylindrical classifiers to be applied to objects appearing in outer pixels of the WFoV image. One or more objects located within the center or outer pixels, or both, of the original WFoV image are detectable by the object detection engine upon application of the one or more cascades of modified object classifiers, including a subset of regular classifiers and the subset of cylindrical classifiers, respectively.
- The device may include a WFoV correction engine configured to correct distortion within the original image. The WFoV correction engine processes raw image data of the original WFoV image. A cylindrical mapping of outer pixels of the original WFoV image is performed. Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image.
- Another method is provided for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The method includes acquiring the original WFoV image, including utilizing at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. Distortion is corrected within the original WFoV image by processing raw image data of the original WFoV image. A cylindrical projection of outer pixels of the original WFoV image is applied. Center pixels and modified outer pixels are combined to generate a distortion-corrected WFoV image. One or more objects are detected by applying one or more cascades of regular object classifiers to one or more objects located within the center or outer pixels, or both, of the distortion-corrected WFoV image.
- A further method is provided for acquiring wide field of view images with an image acquisition device having at least one lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The method includes acquiring the original WFoV image, including utilizing at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. One or more modified object classifiers are applied. A subset of cylindrical classifiers is applied to objects appearing in outer pixels of the WFoV image, and a subset of regular classifiers is applied to objects appearing in center pixels of the WFoV image. One or more objects located within center or outer pixels, or both, of the original WFoV image are detected by the applying of the one or more cascades of modified object classifiers, including the applying of the subset of regular classifiers and the applying of the subset of cylindrical classifiers, respectively.
- The method may include correcting distortion within the original WFoV image by processing raw image data of the original WFoV image, including applying a cylindrical mapping of outer pixels of the original WFoV image, and combining center pixels and modified outer pixels to generate a distortion-corrected WFoV image.
- One or more processor-readable media having embedded therein code is/are provided for programming a processor to perform any of the methods described herein of processing wide field of view images acquired with an image acquisition device having an image sensor and at least one non-linear lens to project a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region to acquire an original wide field of view (WFoV) image with a field of view of more than 90°.
- Another image acquisition device having a wide field of view is provided. The device includes a lens assembly and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The lens assembly includes a compressed rectilinear lens to capture a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. The device also includes a cylindrical lens on one or both sides of the compressed rectilinear lens to capture outer regions of the scene onto outer portions of the image sensor such as to directly provide a cylindrical mapping of the outer regions. An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- Another image acquisition device having a wide field of view is provided. The device includes a lens assembly and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The lens assembly includes a lens having a compressed rectilinear center portion to capture a center region of a scene onto a middle portion of the image sensor such as to directly provide a rectilinear mapping of the center region. The lens also includes cylindrical outer portions on either side of the compressed rectilinear portion to capture outer regions of the scene onto outer portions of the image sensor such as to directly provide a cylindrical mapping of the outer regions. An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the center or outer pixels, or both, of the original WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- Another image acquisition device having a wide field of view is provided. The device includes multiple cameras configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The original wide field of view image includes a combination of multiple images captured each with one of the multiple cameras. The multiple cameras include a first camera having a first image sensor and a compressed rectilinear lens to capture a center region of a scene onto the first sensor such as to directly provide a rectilinear mapping of the center region, and a second camera having a second image sensor and a first cylindrical lens on a first side of the compressed rectilinear lens to capture a first outer region of the scene onto the second image sensor such as to directly provide a cylindrical mapping of the first outer region, and a third camera having a third image sensor and a second cylindrical lens on a second side of the compressed rectilinear lens to capture a second outer region of the scene onto the third image sensor such as to directly provide a cylindrical mapping of the second outer region. An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the original WFoV image, appearing across the multiple cameras, is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- Another image acquisition device having a wide field of view is provided. The device includes multiple cameras configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The original wide field of view image includes a combination of multiple images captured each with one of the multiple cameras. The multiple cameras each utilize a same lens and include a first camera having a first image sensor utilizing a compressed rectilinear portion of the lens to capture a center region of a scene onto the first sensor such as to directly provide a rectilinear mapping of the center region, and a second camera having a second image sensor utilizing a first cylindrical portion of the lens on a first side of the compressed rectilinear portion to capture a first outer region of the scene onto the second image sensor such as to directly provide a cylindrical mapping of the first outer region, and a third camera having a third image sensor utilizing a second cylindrical portion of the lens on a second side of the compressed rectilinear portion to capture a second outer region of the scene onto the third image sensor such as to directly provide a cylindrical mapping of the second outer region.
- An object detection engine of the device includes one or more cascades of regular object classifiers. One or more objects located within the original WFoV image, appearing in any one or more of the multiple captured images, is/are detectable by the object detection engine upon application of the one or more cascades of regular object classifiers.
- Any of the devices described herein may include a full frame buffer coupled with the image sensor for acquiring raw image data, a mixer, and a zoom and pan engine, and/or an object tracking engine, just as any of the methods described herein may include tracking one or more detected objects over multiple sequential frames. Any of the object classifiers described herein may include face classifiers or classifiers of other specific objects. Any of the regular object classifiers described herein may include rectangular object classifiers.
- Exemplary face region images distorted by a WFOV system, in a manner like the building frontages of
FIGS. 1B-1G , might have rectilinear distortion similar to FIG. 1C at the edges, and cylindrical projection as in FIG. 1B . - The system shown in
FIG. 2 includes a wide field of view (WFOV) lens of, for example, 120 degrees; a sensor, for example of 3 megapixels or more; a full frame buffer (e.g., of raw Bayer data); a WFOV correction module; a face detector and face tracker; a zoom and pan engine; a mixer; and a control module. The WFOV system illustrated at FIG. 1 incorporates a lens assembly and a corresponding image sensor, which is typically more elongated than a conventional image sensor. The system further incorporates a face tracking module which employs one or more cascades of rectangular face classifiers. - As the system is configured to image a horizontal field of 90-100 degrees or more, it is desirable to process the scene captured by the system to present an apparently "normal" perspective on the scene. There are several approaches to this, as exemplified by the example drawn from the architectural perspective of a long building described in Appendix A. In the context of our WFOV camera, this disclosure is primarily directed at considering how facial regions will be distorted by the WFOV perspective of this camera. One can consider such facial regions to suffer distortions similar to those of the frontage of the building illustrated in the attached Appendix. Thus the problem of obtaining geometrically consistent face regions across the entire horizontal range of the WFOV camera is substantially similar to the architectural problem described therein.
- Thus, in order to obtain reasonable face regions, it is useful to alter/map the raw image obtained from the original WFOV horizontal scene so that faces appear undistorted. Or in alternative embodiments face classifiers may be altered according to the location of the face regions within an unprocessed (raw) image of the scene.
- In a first preferred embodiment the center region of the image, representing up to 100° of the horizontal field of view (FOV), is mapped using a squeezed rectilinear projection. This may be obtained using a suitable non-linear lens design to directly project the center region of the scene onto the middle ⅔ of the image sensor. The remaining approximately ⅓ portion of the image sensor (i.e. ⅙ at each end of the sensor) has the horizontal scene projected using a cylindrical mapping. Again, in this first preferred embodiment, the edges of the wide-angle lens are designed to optically effect said projection directly onto the imaging sensor.
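The hybrid projection just described can be illustrated numerically. The sketch below is not part of the original disclosure: it assumes a 120 degree horizontal FOV with the central 100 degrees squeezed rectilinearly onto the middle ⅔ of a hypothetical 3000-pixel-wide sensor, and the outer bands mapped cylindrically (linearly in angle) onto the remaining sixths. All names and the chosen compression factors are illustrative; a real lens would have its own calibrated mapping.

```python
import math

def angle_to_sensor_x(theta_deg, sensor_width=3000,
                      fov_deg=120.0, center_fov_deg=100.0):
    """Map a horizontal scene angle (degrees from the optical axis) to a
    sensor x-coordinate for the hybrid projection described above:
    squeezed rectilinear over the middle 2/3 of the sensor, cylindrical
    over the outer 1/6 at each end.  Illustrative sketch only."""
    half_center = center_fov_deg / 2.0   # central band: +/-50 degrees
    half_fov = fov_deg / 2.0             # full band:    +/-60 degrees
    cx = sensor_width / 2.0
    center_px = sensor_width / 3.0       # half of the middle two thirds
    edge_px = sensor_width / 6.0         # each cylindrical outer band
    sign = 1.0 if theta_deg >= 0 else -1.0
    t = abs(theta_deg)
    if t <= half_center:
        # squeezed rectilinear: tan(), normalised so that +/-50 degrees
        # fills exactly the middle two thirds of the sensor
        x = center_px * (math.tan(math.radians(t)) /
                         math.tan(math.radians(half_center)))
    else:
        # cylindrical: sensor position grows linearly with angle
        frac = (t - half_center) / (half_fov - half_center)
        x = center_px + edge_px * min(frac, 1.0)
    return cx + sign * x
```

With these assumed parameters, 0° lands at the sensor center, ±50° at the boundaries of the middle ⅔, and ±60° at the sensor edges.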
- Thus, in a first embodiment, the entire horizontal scene is mapped onto the full extent of the image sensor, as illustrated at
FIG. 3( a). - Naturally, the form and structure of such a complex hybrid optical lens may not be conducive to mass production; thus, in an alternative embodiment, a more conventional rectilinear wide-angle lens is used and the squeezing of the middle ⅔ of the image is achieved by post-processing the sensor data. Similarly, the cylindrical projections of the outer regions of the WFOV scene are performed by post-processing. In this second embodiment the initial projection of the scene onto the sensor does not cover the full extent of the sensor, and thus a significant portion of the sensor area does not contain useful data. The overall resolution of this second embodiment is reduced, and a larger sensor would be needed to achieve accuracy similar to that of the first embodiment, as illustrated at
FIG. 3( b). - In a third embodiment some of the scene mappings are achieved optically, but some additional image post-processing is used to refine the initial projections of the image scene onto the sensor. In this embodiment the lens design can be optimized for manufacturing considerations, a larger portion of the sensor area can be used to capture useful scene data and the software post-processing overhead is similar to the pure software embodiment.
- In a fourth embodiment multiple cameras are configured to cover overlapping portions of the desired field of view and the acquired images are combined into a single WFOV image in memory. Preferably, this plurality of cameras is configured to have the same optical center, thus mitigating perspective-related problems for foreground objects. In such an embodiment, techniques employed in panorama imaging may be used advantageously to join images at their boundaries, or to determine the optimal join line where a significant region of image overlap is available. The following cases, assigned to the same assignee, relate to panorama imaging and are incorporated by reference: Ser. Nos. 12/636,608, 12/636,618, 12/636,629, 12/636,639, and 12/636,647, as are US published application nos. US20060182437, US20090022422, US20090021576 and US20060268130.
- In one preferred embodiment of the multi-camera WFOV device, three or more standard cameras, each with a 60 degree FOV, are combined to provide an overall horizontal WFOV of 120-150 degrees with an overlap of 15-30 degrees between cameras. The field of view of such a camera system can be extended horizontally by adding more cameras; it may be extended vertically by adding an identical array of 3 or more horizontally aligned cameras facing in a higher (or lower) vertical direction, with a similar vertical overlap of 15-30 degrees, offering a vertical FOV of 90-105 degrees for two such WFOV arrays. The vertical FOV may be increased by adding further horizontally aligned camera arrays. Such configurations have the advantage that all individual cameras can be conventional wafer-level cameras (WLC) which can be mass-produced.
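The overlap arithmetic above follows a simple rule: n overlapping cameras cover n·FOV minus (n−1)·overlap degrees. A one-function sketch (function name is ours, not from the disclosure) reproduces the figures quoted in the text:

```python
def total_fov(n_cams, cam_fov_deg, overlap_deg):
    # Combined field of view of n cameras in a row, each neighbouring
    # pair sharing `overlap_deg` degrees of the scene.
    return n_cams * cam_fov_deg - (n_cams - 1) * overlap_deg

# Three 60-degree cameras: 150 degrees at 15 degrees overlap,
# 120 degrees at 30 degrees overlap (the 120-150 range above).
# Two vertical rows of 60 degrees with 15-30 degrees overlap give
# the stated 90-105 degree vertical FOV.
```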
- In an alternative multi-camera embodiment, a central WFOV camera has its range extended by two side cameras. The central WFOV camera can employ an optical lens optimized to provide a 120 degree compressed rectilinear mapping of the central scene. The side cameras can be optimized to provide a cylindrical mapping of the peripheral regions of the scene, thus providing a similar result to that obtained in
FIG. 3( a), but using three independent cameras with independent optical systems rather than a single sensor/ISP as shown in FIG. 3( b). Again, techniques employed in panorama imaging to join overlapping images can be advantageously used (see the panorama cases referred to above). - After image acquisition and, depending on the embodiment, additional post-processing of the image, we arrive at a mapping of the image scene with three main regions. Over the middle third of the image there is a normal rectilinear mapping and the image is undistorted compared to a standard FOV image; over the next ⅓ of the image (i.e. ⅙ of the image on either side) the rectilinear projection becomes increasingly squeezed as illustrated in
FIGS. 1A-1G ; finally, over the outer approximately ⅓ of the image a cylindrical projection, rather than rectilinear is applied. -
FIG. 3( a) illustrates one embodiment where this can be achieved using a compressed rectilinear lens in the middle, surrounded by two cylindrical lenses on either side. In a practical embodiment all three lenses could be combined into a single lens structure designed to minimize distortions where the rectilinear projection of the original scene overlaps with the cylindrical projection. - A standard face-tracker can now be applied to the WFOV image as all face regions should be rendered in a relatively undistorted geometry.
- In alternative embodiments the entire scene need not be re-mapped; instead, only the luminance components are re-mapped and used to generate a geometrically undistorted integral image. Face classifiers are then applied to this integral image in order to detect faces. Once faces are detected, those faces and their surrounding peripheral regions can be re-mapped on each frame, while it may be sufficient to re-map the entire scene background, which is assumed to be static, only occasionally, say every 60-120 image frames. In this way image processing and enhancement can be focused on the people in the image scene.
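For reference, an integral image (summed-area table) of the re-mapped luminance plane lets any rectangular classifier sum be evaluated in constant time, which is why only the luminance need be re-mapped before detection. A minimal NumPy sketch (ours, not from the disclosure) follows:

```python
import numpy as np

def integral_image(y_plane):
    # Summed-area table with a zero border row/column, so that
    # ii[r, c] equals the sum of y_plane[:r, :c].
    ii = np.zeros((y_plane.shape[0] + 1, y_plane.shape[1] + 1),
                  dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(y_plane, axis=0), axis=1)
    return ii

def region_sum(ii, top, left, bottom, right):
    # Sum over y_plane[top:bottom, left:right] in O(1), as used by
    # rectangular (Haar-style) classifiers.
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```

Each classifier rectangle then costs four table lookups regardless of its size.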
- In other alternative embodiments it may not be desirable to completely re-map the entire WFOV scene due to the computational burden involved. Such embodiments may employ techniques described in U.S. Pat. Nos. 7,460,695, 7,403,643, 7,565,030, and 7,315,631 and US published app no. 2009-0263022, which are incorporated by reference along with US20090179998, US20090080713, US 2009-0303342 and U.S. Ser. No. 12/572,930, filed Oct. 2, 2009 by the same assignee. These references describe predicting face regions (determined from the previous several video frames). The image may be transformed using either cylindrical or squeezed rectilinear projection prior to applying a face tracker to the region. In such an embodiment, it may be necessary from time to time to re-map the full WFOV image in order to make an initial determination of new faces within the WFOV image scene. However, after such initial determination only the region immediately surrounding each detected face need be re-mapped.
- In certain embodiments, the remapping of the image scene, or portions thereof, involves the removal of purple fringes (due to blue shift) or the correction of chromatic aberrations. The following case, assigned to the same assignee, is incorporated by reference and relates to purple fringing and chromatic aberration correction: US20090189997.
- In other embodiments a single mapping of the input image scene is used. If, for example, only a simple rectilinear mapping were applied across the entire image scene the edges of the image would be distorted as in
FIG. 1C , and only across the middle 40% or so of the image could a conventional face tracker be used. Accordingly, the rectangular classifiers of the face tracker are modified to take account of the scene mappings across the other 60% of the image scene: over the middle portion of the image they can be applied unaltered; over the second 30% they are selectively expanded or compressed in the horizontal direction to account for the degree of squeezing of the scene during the rectilinear mapping process; finally, in the outer ⅓ the face classifiers are adapted to account for the cylindrical mapping used in this region of the image scene. - In order to transform standard rectangular classifiers of a particular size, say 32×32 pixels, it may be advantageous in some embodiments to increase the size of face classifiers to, for example, 64×64. This larger classifier size would enable greater granularity, and thus improved accuracy, in transforming normal classifiers to distorted ones, at the expense of additional computational burden for the face tracker. However, face tracking technology is broadly adopted across the industry and is known as a robust and well-optimized technology; thus the trade-off of increasing classifiers from 32×32 to 64×64 should not cause a significant delay on most camera or smartphone platforms. The advantage is that pre-existing classifier cascades can be re-used, rather than having to train new, distorted ones.
- Having greater granularity in the classifiers is advantageous particularly when starting to rescale features inside the classifier individually, based on the distance to the optical center. In another embodiment, one can scale the whole classifier (22×22 is a very good size for face classifiers) with a fixed dx,dy computed as the distance from the optical center. Having larger classifiers does not put excessive strain on the processing; advantageously, the opposite is true, because there are fewer scales to cover. In this case, the distance to the subject is reduced.
- In an alternative embodiment an initial, shortened chain of modified classifiers is applied to the raw image (i.e. without any rectilinear or cylindrical re-mapping). This chain is composed of some of the initial face classifiers from a normal face detection chain. These initial classifiers are also, typically, the most aggressive to eliminate non-faces from consideration. These also tend to be simpler in form and the first four Haar classifiers from the Viola-Jones cascade are illustrated in
FIG. 4 (these may be implemented through a 22×22 pixel window in another embodiment). - Where a compressed rectilinear scaling would have been employed (as illustrated in
FIG. 1F ), it is relatively straightforward to invert this scaling and expand (or contract) these classifiers in the horizontal direction to compensate for the distortion of faces in the raw image scene. (In some embodiments where this distortion is cylindrical towards the edges of the scene, classifiers may need to be scaled in both horizontal and vertical directions.) Further, it is possible, from knowledge of the location at which each classifier is to be applied and, optionally, of the size of the detection window, to perform the scaling of these classifiers dynamically. Thus only the original classifiers have to be stored, together with data on the required rectilinear compression factor in the horizontal direction. The latter can easily be achieved using a look-up table (LUT) which is specific to the lens used. - This short classifier chain is employed to obtain a set of potential face regions which may then be re-mapped (using, for example, compressed rectilinear and/or cylindrical mapping) to enable the remainder of a complete face detection classifier chain to be applied to each potential face region. This embodiment relies on the fact that 99.99% of non-face regions are eliminated by applying the first few face classifiers; thus only a small number of potential face regions need be re-mapped, rather than the entire image scene, before applying a full face detection process.
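A dynamic scaling step of this kind might be sketched as follows. This is our illustration, not the disclosed implementation: the function name, the rectangle tuple layout, and the LUT contents are all hypothetical. The horizontal extents of each Haar-style feature rectangle are stretched or shrunk by the compression factor looked up for the detection window's position in the raw image.

```python
def scale_feature_rects(rects, window_x, lut, lut_step=64):
    """Rescale Haar-style feature rectangles for a detection window
    whose left edge sits at raw-image column `window_x`.

    rects: list of (x, y, w, h, weight) rectangles in the canonical
    detection window; lut: hypothetical per-column horizontal
    compression factors for a specific lens, one entry per
    `lut_step` pixels of the sensor."""
    factor = lut[min(window_x // lut_step, len(lut) - 1)]
    # Only horizontal coordinates are scaled, matching the squeezed
    # rectilinear case; a cylindrical band would also scale y and h.
    return [(round(x * factor), y, max(1, round(w * factor)), h, wgt)
            for (x, y, w, h, wgt) in rects]
```

Since the lookup and rescale cost a few operations per rectangle, only the canonical classifiers plus the lens-specific LUT need be stored.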
- In another embodiment, distortion may be compensated for by applying geometrical adjustments (a function of the distance to the optical center) when the integral image is computed (in the cases where template matching is done using the integral image), or by compensating for the distortion when computing the sub-sampled image used for face detection and face tracking (in the cases where template matching is done directly on Y data).
- Note that face classifiers can be divided into symmetric and non-symmetric classifiers. In certain embodiments it may be advantageous to use split classifier chains. For example, right- and left-hand face detector cascades may report detection of a half-face region; this may indicate that a full face is present but that the second half is more or less distorted than would be expected, perhaps because it is closer to or farther from the lens than is normal. In such cases a more relaxed half-face or full-face detector may be employed to confirm whether a full face is actually present, or a lower acceptance threshold may be set for the current detector. The following related apps assigned to the same assignee are incorporated by reference: US2007/0147820, US2010/0053368, US2008/0205712, US2009/0185753, US2008/0219517 and 2010/0054592, and U.S. Ser. No. 61/182,625, filed May 29, 2009 and U.S. Ser. No. 61/221,455, filed Jun. 29, 2009.
- In certain embodiments, when a face is tracked across the scene it may be desired to draw particular attention to that face and to emphasize it against the main scene. In one exemplary embodiment, suitable for applications in videotelephony, there may be one or more faces in the main scene but one (or more) of these is speaking. It is possible, using a stereo microphone to localize the speaking face.
- These face regions, and the other foreground regions (e.g. neck, shoulders & torso), are further processed to magnify them (e.g., in one embodiment by a factor of ×1.8) against the background; in a simple embodiment this magnified face is simply composited onto the background image in the same location as the unmagnified original.
- In a more sophisticated embodiment the other faces and the main background of the image are de-magnified and/or squeezed in order to keep the overall image size self-consistent. This may lead to some image distortion, particularly surrounding the “magnified” face, but this helps to emphasize the person speaking as illustrated in
FIGS. 4( a)-4(c). In this case the degree of magnification is generally less than ×1.5 to avoid excessive distortion across the remainder of the image. - In another embodiment, one can do a background-plus-face mix or combination using an alpha map without worrying about distortions. Then, the face that speaks can be placed at the middle of the frame. In another variation on this embodiment, the degree of magnification can be varied according to the strength or loudness of a speaker's voice, as illustrated at
FIGS. 5( a)-5(c). - In other embodiments based on the same scene re-mapping techniques, the rendering of the face region and surrounding portions of the image can be adjusted to emphasize one or more persons appearing in the final, re-mapped image of the captured scene. In one embodiment within a videophone system, a stereo microphone system triangulates the location of the person speaking and a portion of the scene is zoomed by a factor greater than one. The remaining portions of the image are zoomed by a factor less than one, so that the overall image is of approximately the same dimension. Thus persons appearing in the image appear larger when they are talking and it is easier for viewers to focus on the current speaker from a group.
- The present invention is not limited to the embodiments described above herein, which may be amended or modified without departing from the scope of the present invention.
- In methods that may be performed according to preferred embodiments herein and that may have been described above, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.
- In addition, all references cited above herein, in addition to the background and summary of the invention sections, are hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components. Moreover, as extended depth of field (EDOF) technology may be combined with embodiments described herein into advantageous alternative embodiments, the following are incorporated by reference: US published patent applications numbers 20060256226, 20060519527, 20070239417, 20070236573, 20070236574, 20090128666, 20080095466, 20080316317, 20090147111, 20020145671, 20080075515, 20080021989, 20050107741, 20080028183, 20070045991, 20080008041, 20080009562, 20080038325, 20080045728, 20090531723, 20090190238, 20090141163, and 20080002185.
Claims (29)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/959,137 US20110216157A1 (en) | 2010-03-05 | 2010-12-02 | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
PCT/EP2011/052970 WO2011107448A2 (en) | 2010-03-05 | 2011-03-01 | Object detection and rendering for wide field of view (wfov) image acquisition systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US31126410P | 2010-03-05 | 2010-03-05 | |
US12/959,137 US20110216157A1 (en) | 2010-03-05 | 2010-12-02 | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110216157A1 true US20110216157A1 (en) | 2011-09-08 |
Family
ID=44530984
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/959,089 Active 2032-04-18 US8872887B2 (en) | 2010-03-05 | 2010-12-02 | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US12/959,151 Expired - Fee Related US8692867B2 (en) | 2010-03-05 | 2010-12-02 | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US12/959,137 Abandoned US20110216157A1 (en) | 2010-03-05 | 2010-12-02 | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/959,089 Active 2032-04-18 US8872887B2 (en) | 2010-03-05 | 2010-12-02 | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US12/959,151 Expired - Fee Related US8692867B2 (en) | 2010-03-05 | 2010-12-02 | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
Country Status (2)
Country | Link |
---|---|
US (3) | US8872887B2 (en) |
WO (1) | WO2011107448A2 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110216156A1 (en) * | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
US20130307922A1 (en) * | 2012-05-17 | 2013-11-21 | Hong-Long Chou | Image pickup device and image synthesis method thereof |
US8723959B2 (en) | 2011-03-31 | 2014-05-13 | DigitalOptics Corporation Europe Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US9091843B1 (en) | 2014-03-16 | 2015-07-28 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low track length to focal length ratio |
US9316808B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with a low sag aspheric lens element |
US9316820B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low astigmatism |
US9494772B1 (en) | 2014-03-16 | 2016-11-15 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low field curvature |
US9726859B1 (en) | 2014-03-16 | 2017-08-08 | Navitar Industries, Llc | Optical assembly for a wide field of view camera with low TV distortion |
US9995910B1 (en) | 2014-03-16 | 2018-06-12 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with high MTF |
US10139595B1 (en) | 2014-03-16 | 2018-11-27 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low first lens diameter to image diagonal ratio |
US10386604B1 (en) | 2014-03-16 | 2019-08-20 | Navitar Industries, Llc | Compact wide field of view digital camera with stray light impact suppression |
US10545314B1 (en) | 2014-03-16 | 2020-01-28 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low lateral chromatic aberration |
US20220224877A1 (en) * | 2017-04-01 | 2022-07-14 | Intel Corporation | Barreling and compositing of images |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8308379B2 (en) | 2010-12-01 | 2012-11-13 | Digitaloptics Corporation | Three-pole tilt control system for camera module |
US8508652B2 (en) | 2011-02-03 | 2013-08-13 | DigitalOptics Corporation Europe Limited | Autofocus method |
US9866731B2 (en) | 2011-04-12 | 2018-01-09 | Smule, Inc. | Coordinating and mixing audiovisual content captured from geographically distributed performers |
US9088714B2 (en) * | 2011-05-17 | 2015-07-21 | Apple Inc. | Intelligent image blending for panoramic photography |
US9762794B2 (en) | 2011-05-17 | 2017-09-12 | Apple Inc. | Positional sensor-assisted perspective correction for panoramic photography |
US9247133B2 (en) | 2011-06-01 | 2016-01-26 | Apple Inc. | Image registration using sliding registration windows |
JP5020398B1 (en) * | 2011-06-29 | 2012-09-05 | パナソニック株式会社 | Image conversion apparatus, camera, image conversion method and program |
WO2013012578A1 (en) * | 2011-07-17 | 2013-01-24 | Ziva Corporation | Optical imaging with foveation |
US8493459B2 (en) | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of distorted images |
US8493460B2 (en) | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of differently scaled images |
US9800744B2 (en) | 2012-02-09 | 2017-10-24 | Brady Worldwide, Inc. | Systems and methods for label creation using object recognition |
WO2013136053A1 (en) | 2012-03-10 | 2013-09-19 | Digitaloptics Corporation | Miniature camera module with mems-actuated autofocus |
US9294667B2 (en) | 2012-03-10 | 2016-03-22 | Digitaloptics Corporation | MEMS auto focus miniature camera module with fixed and movable lens groups |
CN103428410B (en) * | 2012-05-17 | 2016-08-31 | 华晶科技股份有限公司 | Image capture unit and image synthesis method thereof |
US9098922B2 (en) | 2012-06-06 | 2015-08-04 | Apple Inc. | Adaptive image blending operations |
US10306140B2 (en) | 2012-06-06 | 2019-05-28 | Apple Inc. | Motion adaptive image slice selection |
WO2014072837A2 (en) | 2012-06-07 | 2014-05-15 | DigitalOptics Corporation Europe Limited | Mems fast focus camera module |
WO2014001095A1 (en) * | 2012-06-26 | 2014-01-03 | Thomson Licensing | Method for audiovisual content dubbing |
US8928730B2 (en) * | 2012-07-03 | 2015-01-06 | DigitalOptics Corporation Europe Limited | Method and system for correcting a distorted input image |
US9007520B2 (en) | 2012-08-10 | 2015-04-14 | Nanchang O-Film Optoelectronics Technology Ltd | Camera module with EMI shield |
US9001268B2 (en) | 2012-08-10 | 2015-04-07 | Nan Chang O-Film Optoelectronics Technology Ltd | Auto-focus camera module with flexible printed circuit extension |
US9242602B2 (en) | 2012-08-27 | 2016-01-26 | Fotonation Limited | Rearview imaging systems for vehicle |
US8988586B2 (en) | 2012-12-31 | 2015-03-24 | Digitaloptics Corporation | Auto-focus camera module with MEMS closed loop compensator |
KR101800617B1 (en) * | 2013-01-02 | 2017-12-20 | 삼성전자주식회사 | Display apparatus and Method for video calling thereof |
CN103945103B (en) * | 2013-01-17 | 2017-05-24 | 成都国翼电子技术有限公司 | Multi-plane secondary projection panoramic camera image distortion elimination method based on cylinder |
US9204052B2 (en) * | 2013-02-12 | 2015-12-01 | Nokia Technologies Oy | Method and apparatus for transitioning capture mode |
US8849064B2 (en) | 2013-02-14 | 2014-09-30 | Fotonation Limited | Method and apparatus for viewing images |
US20140307097A1 (en) | 2013-04-12 | 2014-10-16 | DigitalOptics Corporation Europe Limited | Method of Generating a Digital Video Image Using a Wide-Angle Field of View Lens |
US9832378B2 (en) | 2013-06-06 | 2017-11-28 | Apple Inc. | Exposure mapping and dynamic thresholding for blending of multiple images using floating exposure |
US9262801B2 (en) | 2014-04-01 | 2016-02-16 | Gopro, Inc. | Image taping in a multi-camera array |
US10154194B2 (en) * | 2014-12-31 | 2018-12-11 | Logan Gilpin | Video capturing and formatting system |
CN105657276A (en) * | 2016-02-29 | 2016-06-08 | 广东欧珀移动通信有限公司 | Control method, control device and electronic device |
US10742878B2 (en) * | 2016-06-21 | 2020-08-11 | Symbol Technologies, Llc | Stereo camera device with improved depth resolution |
US10528850B2 (en) * | 2016-11-02 | 2020-01-07 | Ford Global Technologies, Llc | Object classification adjustment based on vehicle communication |
US10185878B2 (en) | 2017-02-28 | 2019-01-22 | Microsoft Technology Licensing, Llc | System and method for person counting in image data |
US11182639B2 (en) | 2017-04-16 | 2021-11-23 | Facebook, Inc. | Systems and methods for provisioning content |
US10331960B2 (en) | 2017-05-10 | 2019-06-25 | Fotonation Limited | Methods for detecting, identifying and displaying object information with a multi-camera vision system |
US11615566B2 (en) | 2017-05-10 | 2023-03-28 | Fotonation Limited | Multi-camera vehicle vision system and method |
US10740627B2 (en) | 2017-05-10 | 2020-08-11 | Fotonation Limited | Multi-camera vision system and method of monitoring |
US10491819B2 (en) | 2017-05-10 | 2019-11-26 | Fotonation Limited | Portable system providing augmented vision of surroundings |
EP3667414B1 (en) | 2018-12-14 | 2020-11-25 | Axis AB | A system for panoramic imaging |
CN111612812B (en) * | 2019-02-22 | 2023-11-03 | 富士通株式会社 | Target object detection method, detection device and electronic equipment |
CN111667398B (en) * | 2019-03-07 | 2023-08-01 | 株式会社理光 | Image processing method, apparatus and computer readable storage medium |
CN112312056A (en) * | 2019-08-01 | 2021-02-02 | 普兰特龙尼斯公司 | Video conferencing with adaptive lens distortion correction and image distortion reduction |
US11640701B2 (en) | 2020-07-31 | 2023-05-02 | Analog Devices International Unlimited Company | People detection and tracking with multiple features augmented with orientation and size based classifiers |
Citations (80)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US1906509A (en) * | 1928-01-17 | 1933-05-02 | Firm Photogrammetrie G M B H | Correction for distortion the component pictures produced from different photographic registering devices |
US3251283A (en) * | 1964-02-11 | 1966-05-17 | Itek Corp | Photographic system |
US3356002A (en) * | 1965-07-14 | 1967-12-05 | Gen Precision Inc | Wide angle optical system |
US4555168A (en) * | 1981-08-24 | 1985-11-26 | Walter Meier | Device for projecting steroscopic, anamorphotically compressed pairs of images on to a spherically curved wide-screen surface |
US5000549A (en) * | 1988-09-30 | 1991-03-19 | Canon Kabushiki Kaisha | Zoom lens for stabilizing the image |
US5359513A (en) * | 1992-11-25 | 1994-10-25 | Arch Development Corporation | Method and system for detection of interval change in temporally sequential chest images |
US5526045A (en) * | 1983-12-29 | 1996-06-11 | Matsushita Electric Industrial Co., Ltd. | Camera apparatus which automatically corrects image fluctuations |
US5579169A (en) * | 1993-09-13 | 1996-11-26 | Nikon Corporation | Underwater wide angle lens |
US5585966A (en) * | 1993-12-28 | 1996-12-17 | Nikon Corporation | Zoom lens with vibration reduction function |
US5633756A (en) * | 1991-10-31 | 1997-05-27 | Canon Kabushiki Kaisha | Image stabilizing apparatus |
US5675380A (en) * | 1994-12-29 | 1997-10-07 | U.S. Philips Corporation | Device for forming an image and method of correcting geometrical optical distortions in an image |
US5850470A (en) * | 1995-08-30 | 1998-12-15 | Siemens Corporate Research, Inc. | Neural network for locating and recognizing a deformable object |
US5960108A (en) * | 1997-06-12 | 1999-09-28 | Apple Computer, Inc. | Method and system for creating an image-based virtual reality environment utilizing a fisheye lens |
US5986668A (en) * | 1997-08-01 | 1999-11-16 | Microsoft Corporation | Deghosting method and apparatus for construction of image mosaics |
US6044181A (en) * | 1997-08-01 | 2000-03-28 | Microsoft Corporation | Focal length estimation method and apparatus for construction of panoramic mosaic images |
US6078701A (en) * | 1997-08-01 | 2000-06-20 | Sarnoff Corporation | Method and apparatus for performing local to global multiframe alignment to construct mosaic images |
US6219089B1 (en) * | 1997-05-08 | 2001-04-17 | Be Here Corporation | Method and apparatus for electronically distributing images from a panoptic camera system |
US6222683B1 (en) * | 1999-01-13 | 2001-04-24 | Be Here Corporation | Panoramic imaging arrangement |
US6392687B1 (en) * | 1997-05-08 | 2002-05-21 | Be Here Corporation | Method and apparatus for implementing a panoptic camera system |
US20020063802A1 (en) * | 1994-05-27 | 2002-05-30 | Be Here Corporation | Wide-angle dewarping method and apparatus |
US20020114536A1 (en) * | 1998-09-25 | 2002-08-22 | Yalin Xiong | Aligning rectilinear images in 3D through projective registration and calibration |
US6466254B1 (en) * | 1997-05-08 | 2002-10-15 | Be Here Corporation | Method and apparatus for electronically distributing motion panoramic images |
US20030103063A1 (en) * | 2001-12-03 | 2003-06-05 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
US6664956B1 (en) * | 2000-10-12 | 2003-12-16 | Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret A. S. | Method for generating a personalized 3-D face model |
US20040061787A1 (en) * | 2002-09-30 | 2004-04-01 | Zicheng Liu | Foveated wide-angle imaging system and method for capturing and viewing wide-angle images in real time |
US6750903B1 (en) * | 1998-03-05 | 2004-06-15 | Hitachi, Ltd. | Super high resolution camera |
US20040233461A1 (en) * | 1999-11-12 | 2004-11-25 | Armstrong Brian S. | Methods and apparatus for measuring orientation and distance |
US20050166054A1 (en) * | 2003-12-17 | 2005-07-28 | Yuji Fujimoto | Data processing apparatus and method and encoding device of same |
US20050169529A1 (en) * | 2004-02-03 | 2005-08-04 | Yuri Owechko | Active learning system for object fingerprinting |
US20060093238A1 (en) * | 2004-10-28 | 2006-05-04 | Eran Steinberg | Method and apparatus for red-eye detection in an acquired digital image using face recognition |
US7058237B2 (en) * | 2002-06-28 | 2006-06-06 | Microsoft Corporation | Real-time wide-angle image correction system and method for computer image viewing |
US20060140449A1 (en) * | 2004-12-27 | 2006-06-29 | Hitachi, Ltd. | Apparatus and method for detecting vehicle |
US20070172150A1 (en) * | 2006-01-19 | 2007-07-26 | Shuxue Quan | Hand jitter reduction compensating for rotational motion |
US20070206941A1 (en) * | 2006-03-03 | 2007-09-06 | Atsushi Maruyama | Imaging apparatus and imaging method |
US7280289B2 (en) * | 2005-02-21 | 2007-10-09 | Fujinon Corporation | Wide angle imaging lens |
US7327899B2 (en) * | 2002-06-28 | 2008-02-05 | Microsoft Corp. | System and method for head size equalization in 360 degree panoramic images |
US20080075352A1 (en) * | 2006-09-27 | 2008-03-27 | Hisae Shibuya | Defect classification method and apparatus, and defect inspection apparatus |
US20080175436A1 (en) * | 2007-01-24 | 2008-07-24 | Sanyo Electric Co., Ltd. | Image processor, vehicle, and image processing method |
US7495845B2 (en) * | 2005-10-21 | 2009-02-24 | Fujinon Corporation | Wide-angle imaging lens |
US7499638B2 (en) * | 2003-08-28 | 2009-03-03 | Olympus Corporation | Object recognition apparatus |
US20090074323A1 (en) * | 2006-05-01 | 2009-03-19 | Nikon Corporation | Image processing method, carrier medium carrying image processing program, image processing apparatus, and imaging apparatus |
US20090180713A1 (en) * | 2008-01-10 | 2009-07-16 | Samsung Electronics Co., Ltd | Method and system of adaptive reformatting of digital image |
US20090220156A1 (en) * | 2008-02-29 | 2009-09-03 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and storage medium |
US7609850B2 (en) * | 2004-12-09 | 2009-10-27 | Sony United Kingdom Limited | Data processing apparatus and method |
US7613357B2 (en) * | 2005-09-20 | 2009-11-03 | Gm Global Technology Operations, Inc. | Method for warped image object recognition |
US7612946B2 (en) * | 2006-10-24 | 2009-11-03 | Nanophotonics Co., Ltd. | Wide-angle lenses |
US20090310828A1 (en) * | 2007-10-12 | 2009-12-17 | The University Of Houston System | An automated method for human face modeling and relighting with application to face recognition |
US20100002071A1 (en) * | 2004-04-30 | 2010-01-07 | Grandeye Ltd. | Multiple View and Multiple Object Processing in Wide-Angle Video Camera |
US20100014721A1 (en) * | 2004-01-22 | 2010-01-21 | Fotonation Ireland Limited | Classification System for Consumer Digital Images using Automatic Workflow and Face Detection and Recognition |
US20100033551A1 (en) * | 2008-08-08 | 2010-02-11 | Adobe Systems Incorporated | Content-Aware Wide-Angle Images |
US20100046837A1 (en) * | 2006-11-21 | 2010-02-25 | Koninklijke Philips Electronics N.V. | Generation of depth map for an image |
US20100066822A1 (en) * | 2004-01-22 | 2010-03-18 | Fotonation Ireland Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US20100166300A1 (en) * | 2008-12-31 | 2010-07-01 | Stmicroelectronics S.R.I. | Method of generating motion vectors of images of a video sequence |
US20100215251A1 (en) * | 2007-10-11 | 2010-08-26 | Koninklijke Philips Electronics N.V. | Method and device for processing a depth-map |
US7835071B2 (en) * | 2007-09-10 | 2010-11-16 | Sumitomo Electric Industries, Ltd. | Far-infrared camera lens, lens unit, and imaging apparatus |
US7843652B2 (en) * | 2005-10-21 | 2010-11-30 | Fujinon Corporation | Wide-angle imaging lens |
US20100305869A1 (en) * | 2003-08-01 | 2010-12-02 | Dexcom, Inc. | Transcutaneous analyte sensor |
US20100303381A1 (en) * | 2007-05-15 | 2010-12-02 | Koninklijke Philips Electronics N.V. | Imaging system and imaging method for imaging a region of interest |
US7848548B1 (en) * | 2007-06-11 | 2010-12-07 | Videomining Corporation | Method and system for robust demographic classification using pose independent model from sequence of face images |
US20110002071A1 (en) * | 2008-03-06 | 2011-01-06 | Keqing Zhang | Leakage protective plug |
US7907793B1 (en) * | 2001-05-04 | 2011-03-15 | Legend Films Inc. | Image sequence depth enhancement system and method |
US20110085049A1 (en) * | 2009-10-14 | 2011-04-14 | Zoran Corporation | Method and apparatus for image stabilization |
US7929221B2 (en) * | 2006-04-10 | 2011-04-19 | Alex Ning | Ultra-wide angle objective lens |
US20110116720A1 (en) * | 2009-11-17 | 2011-05-19 | Samsung Electronics Co., Ltd. | Method and apparatus for image processing |
US20110216158A1 (en) * | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
US20110298795A1 (en) * | 2009-02-18 | 2011-12-08 | Koninklijke Philips Electronics N.V. | Transferring of 3d viewer metadata |
US8094183B2 (en) * | 2006-08-11 | 2012-01-10 | Funai Electric Co., Ltd. | Panoramic imaging device |
US8134479B2 (en) * | 2008-03-27 | 2012-03-13 | Mando Corporation | Monocular motion stereo-based free parking space detection apparatus and method |
US8144033B2 (en) * | 2007-09-26 | 2012-03-27 | Nissan Motor Co., Ltd. | Vehicle periphery monitoring apparatus and image displaying method |
US8194993B1 (en) * | 2008-08-29 | 2012-06-05 | Adobe Systems Incorporated | Method and apparatus for matching image metadata to a profile database to determine image processing parameters |
US8264524B1 (en) * | 2008-09-17 | 2012-09-11 | Grandeye Limited | System for streaming multiple regions deriving from a wide-angle camera |
US20120249841A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US20120249726A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries |
US20120249725A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US20120249727A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Superresolution enhancement of peripheral regions in nonlinear lens geometries |
US8311344B2 (en) * | 2008-02-15 | 2012-11-13 | Digitalsmiths, Inc. | Systems and methods for semantically classifying shots in video |
US8340453B1 (en) * | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8379014B2 (en) * | 2007-10-11 | 2013-02-19 | Mvtec Software Gmbh | System and method for 3D object recognition |
US8493459B2 (en) * | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of distorted images |
US8493460B2 (en) * | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of differently scaled images |
Family Cites Families (170)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2748678B2 (en) | 1990-10-09 | 1998-05-13 | 松下電器産業株式会社 | Gradation correction method and gradation correction device |
US5508734A (en) | 1994-07-27 | 1996-04-16 | International Business Machines Corporation | Method and apparatus for hemispheric imaging which emphasizes peripheral content |
US5724456A (en) * | 1995-03-31 | 1998-03-03 | Polaroid Corporation | Brightness adjustment of images using digital scene analysis |
US5991456A (en) | 1996-05-29 | 1999-11-23 | Science And Technology Corporation | Method of improving a digital image |
US5978519A (en) | 1996-08-06 | 1999-11-02 | Xerox Corporation | Automatic image cropping |
US5818975A (en) | 1996-10-28 | 1998-10-06 | Eastman Kodak Company | Method and apparatus for area selective exposure adjustment |
US6249315B1 (en) | 1997-03-24 | 2001-06-19 | Jack M. Holm | Strategy for pictorial digital image processing |
US6407777B1 (en) | 1997-10-09 | 2002-06-18 | Deluca Michael Joseph | Red-eye filter method and apparatus |
US7352394B1 (en) | 1997-10-09 | 2008-04-01 | Fotonation Vision Limited | Image modification based on red-eye filter analysis |
US7738015B2 (en) | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
US7042505B1 (en) | 1997-10-09 | 2006-05-09 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus |
US7630006B2 (en) | 1997-10-09 | 2009-12-08 | Fotonation Ireland Limited | Detecting red eye filter and apparatus using meta-data |
EP0913751B1 (en) * | 1997-11-03 | 2003-09-03 | Volkswagen Aktiengesellschaft | Autonomous vehicle and guiding method for an autonomous vehicle |
US6035072A (en) | 1997-12-08 | 2000-03-07 | Read; Robert Lee | Mapping defects or dirt dynamically affecting an image acquisition device |
US6268939B1 (en) | 1998-01-08 | 2001-07-31 | Xerox Corporation | Method and apparatus for correcting luminance and chrominance data in digital color images |
US6192149B1 (en) | 1998-04-08 | 2001-02-20 | Xerox Corporation | Method and apparatus for automatic detection of image target gamma |
JPH11298780A (en) * | 1998-04-10 | 1999-10-29 | Nhk Eng Service | Wide-area image-pickup device and spherical cavity projection device |
US6097470A (en) | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
US6456732B1 (en) | 1998-09-11 | 2002-09-24 | Hewlett-Packard Company | Automatic rotation, cropping and scaling of images for printing |
JP3291259B2 (en) | 1998-11-11 | 2002-06-10 | キヤノン株式会社 | Image processing method and recording medium |
US6473199B1 (en) | 1998-12-18 | 2002-10-29 | Eastman Kodak Company | Correcting exposure and tone scale of digital images captured by an image capture device |
US6396599B1 (en) | 1998-12-21 | 2002-05-28 | Eastman Kodak Company | Method and apparatus for modifying a portion of an image in accordance with colorimetric parameters |
US6282317B1 (en) | 1998-12-31 | 2001-08-28 | Eastman Kodak Company | Method for automatic determination of main subjects in photographic images |
US6438264B1 (en) | 1998-12-31 | 2002-08-20 | Eastman Kodak Company | Method for compensating image color when adjusting the contrast of a digital color image |
US6421468B1 (en) | 1999-01-06 | 2002-07-16 | Seiko Epson Corporation | Method and apparatus for sharpening an image by scaling elements of a frequency-domain representation |
US6393148B1 (en) | 1999-05-13 | 2002-05-21 | Hewlett-Packard Company | Contrast enhancement of an image using luminance and RGB statistical metrics |
US7292261B1 (en) * | 1999-08-20 | 2007-11-06 | Patrick Teo | Virtual reality camera |
US6504951B1 (en) | 1999-11-29 | 2003-01-07 | Eastman Kodak Company | Method for detecting sky in images |
US6516147B2 (en) | 1999-12-20 | 2003-02-04 | Polaroid Corporation | Scene recognition method and system using brightness and ranging mapping |
US6618511B1 (en) * | 1999-12-31 | 2003-09-09 | Stmicroelectronics, Inc. | Perspective correction for panoramic digital camera with remote processing |
US6654507B2 (en) | 2000-12-14 | 2003-11-25 | Eastman Kodak Company | Automatically producing an image of a portion of a photographic image |
US7065256B2 (en) | 2001-02-08 | 2006-06-20 | Dblur Technologies Ltd. | Method for processing a digital image |
US7262798B2 (en) * | 2001-09-17 | 2007-08-28 | Hewlett-Packard Development Company, L.P. | System and method for simulating fill flash in photography |
WO2004063989A2 (en) | 2003-01-16 | 2004-07-29 | D-Blur Technologies Ltd. | Camera with image enhancement functions |
US20070236573A1 (en) | 2006-03-31 | 2007-10-11 | D-Blur Technologies Ltd. | Combined design of optical and image processing elements |
US7773316B2 (en) | 2003-01-16 | 2010-08-10 | Tessera International, Inc. | Optics for an extended depth of field |
US8036458B2 (en) | 2007-11-08 | 2011-10-11 | DigitalOptics Corporation Europe Limited | Detecting redeye defects in digital images |
US8363951B2 (en) | 2007-03-05 | 2013-01-29 | DigitalOptics Corporation Europe Limited | Face recognition training method and apparatus |
US8989453B2 (en) * | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US7689009B2 (en) * | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
US9160897B2 (en) | 2007-06-14 | 2015-10-13 | Fotonation Limited | Fast motion estimation method |
US8989516B2 (en) | 2007-09-18 | 2015-03-24 | Fotonation Limited | Image processing method and apparatus |
US7536036B2 (en) | 2004-10-28 | 2009-05-19 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US8498452B2 (en) | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US9129381B2 (en) | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US8155397B2 (en) * | 2007-09-26 | 2012-04-10 | DigitalOptics Corporation Europe Limited | Face tracking in a camera processor |
US7639889B2 (en) | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method of notifying users regarding motion artifacts based on image analysis |
US7315630B2 (en) * | 2003-06-26 | 2008-01-01 | Fotonation Vision Limited | Perfecting of digital image rendering parameters within rendering devices using face detection |
US7620218B2 (en) | 2006-08-11 | 2009-11-17 | Fotonation Ireland Limited | Real-time face tracking with reference images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US7792970B2 (en) | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US7702236B2 (en) * | 2006-02-14 | 2010-04-20 | Fotonation Vision Limited | Digital image acquisition device with built in dust and sensor mapping capability |
US7587068B1 (en) | 2004-01-22 | 2009-09-08 | Fotonation Vision Limited | Classification database for consumer digital images |
US7920723B2 (en) | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7616233B2 (en) * | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US7844076B2 (en) | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US8199222B2 (en) | 2007-03-05 | 2012-06-12 | DigitalOptics Corporation Europe Limited | Low-light video frame enhancement |
US8170294B2 (en) | 2006-11-10 | 2012-05-01 | DigitalOptics Corporation Europe Limited | Method of detecting redeye in a digital image |
US7471846B2 (en) | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US7506057B2 (en) | 2005-06-17 | 2009-03-17 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US7680342B2 (en) | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
US8254674B2 (en) | 2004-10-28 | 2012-08-28 | DigitalOptics Corporation Europe Limited | Analyzing partial face regions for red-eye detection in acquired digital images |
US7685341B2 (en) | 2005-05-06 | 2010-03-23 | Fotonation Vision Limited | Remote control apparatus for consumer electronic appliances |
US7362368B2 (en) | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US8417055B2 (en) | 2007-03-05 | 2013-04-09 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US7587085B2 (en) | 2004-10-28 | 2009-09-08 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7636486B2 (en) | 2004-11-10 | 2009-12-22 | Fotonation Ireland Ltd. | Method of determining PSF using multiple instances of a nominally similar scene |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US7269292B2 (en) | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US7792335B2 (en) * | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US7970182B2 (en) | 2005-11-18 | 2011-06-28 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US7440593B1 (en) | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US8180173B2 (en) * | 2007-09-21 | 2012-05-15 | DigitalOptics Corporation Europe Limited | Flash artifact eye defect correction in blurred images using anisotropic blurring |
US7606417B2 (en) | 2004-08-16 | 2009-10-20 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations |
US8593542B2 (en) | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US8339462B2 (en) | 2008-01-28 | 2012-12-25 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for addressing chromatic aberrations and purple fringing |
US7747596B2 (en) * | 2005-06-17 | 2010-06-29 | Fotonation Vision Ltd. | Server device, user interface appliance, and media processing network |
US8264576B2 (en) | 2007-03-05 | 2012-09-11 | DigitalOptics Corporation Europe Limited | RGBW sensor array |
US7317815B2 (en) * | 2003-06-26 | 2008-01-08 | Fotonation Vision Limited | Digital image processing composition using face detection information |
US9412007B2 (en) | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US20050031224A1 (en) | 2003-08-05 | 2005-02-10 | Yury Prilutsky | Detecting red eye filter and apparatus using meta-data |
US20100053367A1 (en) | 2003-08-05 | 2010-03-04 | Fotonation Ireland Limited | Partial face tracker for red-eye filter method and apparatus |
US20050140801A1 (en) | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
US8520093B2 (en) | 2003-08-05 | 2013-08-27 | DigitalOptics Corporation Europe Limited | Face tracker and partial face tracker for red-eye filter method and apparatus |
US7315658B2 (en) * | 2003-09-30 | 2008-01-01 | Fotonation Vision Limited | Digital camera |
US7310450B2 (en) | 2003-09-30 | 2007-12-18 | Fotonation Vision Limited | Method of detecting and correcting dust in digital images based on aura and shadow region analysis |
US7369712B2 (en) | 2003-09-30 | 2008-05-06 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on multiple occurrences of dust in images |
US7206461B2 (en) | 2003-09-30 | 2007-04-17 | Fotonation Vision Limited | Digital image acquisition and processing system |
US7295233B2 (en) | 2003-09-30 | 2007-11-13 | Fotonation Vision Limited | Detection and removal of blemishes in digital images utilizing original images of defocused scenes |
US7340109B2 (en) | 2003-09-30 | 2008-03-04 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images dependent upon changes in extracted parameter values |
US7676110B2 (en) | 2003-09-30 | 2010-03-09 | Fotonation Vision Limited | Determination of need to service a camera based on detection of blemishes in digital images |
US7590305B2 (en) | 2003-09-30 | 2009-09-15 | Fotonation Vision Limited | Digital camera with built-in lens calibration table |
US7424170B2 (en) | 2003-09-30 | 2008-09-09 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on determining probabilities based on image analysis of single images |
US7308156B2 (en) | 2003-09-30 | 2007-12-11 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on a dust map developed from actual image data |
US8369650B2 (en) | 2003-09-30 | 2013-02-05 | DigitalOptics Corporation Europe Limited | Image defect map creation using batches of digital images |
US7326195B2 (en) | 2003-11-18 | 2008-02-05 | Boston Scientific Scimed, Inc. | Targeted cooling of tissue within a body |
US7558408B1 (en) | 2004-01-22 | 2009-07-07 | Fotonation Vision Limited | Classification system for consumer digital images using workflow and user interface modules, and face detection and recognition |
US7555148B1 (en) | 2004-01-22 | 2009-06-30 | Fotonation Vision Limited | Classification system for consumer digital images using workflow, face detection, normalization, and face recognition |
US7551755B1 (en) | 2004-01-22 | 2009-06-23 | Fotonation Vision Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
JP2005252625A (en) | 2004-03-03 | 2005-09-15 | Canon Inc | Image pickup device and image processing method |
EP2174925B1 (en) | 2004-07-21 | 2014-10-15 | Dow Global Technologies LLC | Conversion of a multihydroxylated-aliphatic hydrocarbon or ester thereof to a chlorohydrin |
EP1788059A4 (en) * | 2004-09-02 | 2009-11-11 | Yokohama Rubber Co Ltd | Adhesive compositions for optical fibers |
US8320641B2 (en) | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
US7639888B2 (en) * | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
US8488023B2 (en) | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US7315631B1 (en) | 2006-08-11 | 2008-01-01 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US20060182437A1 (en) | 2005-02-11 | 2006-08-17 | Williams Karen E | Method and apparatus for previewing a panoramic image on a digital camera |
US7694048B2 (en) | 2005-05-06 | 2010-04-06 | Fotonation Vision Limited | Remote control apparatus for printer appliances |
US7839429B2 (en) | 2005-05-26 | 2010-11-23 | Hewlett-Packard Development Company, L.P. | In-camera panorama stitching method and apparatus |
DE102005038029B3 (en) | 2005-08-08 | 2006-11-09 | Otto Bock Healthcare Ip Gmbh & Co. Kg | Wheelchair, with a seat which can be raised and lowered, has slits in the rear ends of the longitudinal rails under the seat to take the lower end of the backrest with a sliding movement for seat height adjustment |
JP2007124088A (en) * | 2005-10-26 | 2007-05-17 | Olympus Corp | Image photographing device |
US8115840B2 (en) | 2005-11-10 | 2012-02-14 | DigitalOptics Corporation International | Image enhancement in the mosaic domain |
US7599577B2 (en) | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
US8154636B2 (en) | 2005-12-21 | 2012-04-10 | DigitalOptics Corporation International | Image enhancement using hardware-based deconvolution |
US7692696B2 (en) | 2005-12-27 | 2010-04-06 | Fotonation Vision Limited | Digital image acquisition system with portrait mode |
IES20060558A2 (en) * | 2006-02-14 | 2006-11-01 | Fotonation Vision Ltd | Image blurring |
WO2007095553A2 (en) | 2006-02-14 | 2007-08-23 | Fotonation Vision Limited | Automatic detection and correction of non-red eye flash defects |
US7469071B2 (en) | 2006-02-14 | 2008-12-23 | Fotonation Vision Limited | Image blurring |
US7804983B2 (en) | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US7551754B2 (en) | 2006-02-24 | 2009-06-23 | Fotonation Vision Limited | Method and apparatus for selective rejection of digital images |
US8266413B2 (en) * | 2006-03-14 | 2012-09-11 | The Board Of Trustees Of The University Of Illinois | Processor architecture for multipass processing of instructions downstream of a stalled instruction |
US20070236574A1 (en) | 2006-03-31 | 2007-10-11 | D-Blur Technologies Ltd. | Digital filtering with noise gain limit |
US20070239417A1 (en) | 2006-03-31 | 2007-10-11 | D-Blur Technologies Ltd. | Camera performance simulation |
IES20060564A2 (en) | 2006-05-03 | 2006-11-01 | Fotonation Vision Ltd | Improved foreground / background separation |
IES20070229A2 (en) | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
WO2007146176A2 (en) | 2006-06-08 | 2007-12-21 | The Board Of Regents Of The University Of Nebraska-Lincoln | System and methods for non-destructive analysis |
WO2008023280A2 (en) * | 2006-06-12 | 2008-02-28 | Fotonation Vision Limited | Advances in extending the aam techniques from grayscale to color images |
US8923095B2 (en) | 2006-07-05 | 2014-12-30 | Westerngeco L.L.C. | Short circuit protection for serially connected nodes in a hydrocarbon exploration or production electrical system |
US8126993B2 (en) | 2006-07-18 | 2012-02-28 | Nvidia Corporation | System, method, and computer program product for communicating sub-device state information |
US7515740B2 (en) | 2006-08-02 | 2009-04-07 | Fotonation Vision Limited | Face recognition with combined PCA-based datasets |
WO2008022005A2 (en) | 2006-08-09 | 2008-02-21 | Fotonation Vision Limited | Detection and correction of flash artifacts from airborne particulates |
EP1889608B1 (en) | 2006-08-09 | 2012-11-28 | Korea Atomic Energy Research Institute | Therapeutic hydrogel for atopic dermatitis and preparation method thereof |
US20090115915A1 (en) | 2006-08-09 | 2009-05-07 | Fotonation Vision Limited | Camera Based Feedback Loop Calibration of a Projection Device |
US7916897B2 (en) | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US7403643B2 (en) | 2006-08-11 | 2008-07-22 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US20080075515A1 (en) * | 2006-09-26 | 2008-03-27 | William Thomas Large | Ergonomic and Key Recognition Advantage by Numeric Key Elevation |
US7907791B2 (en) | 2006-11-27 | 2011-03-15 | Tessera International, Inc. | Processing of mosaic images |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
JP5049356B2 (en) | 2007-02-28 | 2012-10-17 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Separation of directional lighting variability in statistical face modeling based on texture space decomposition |
KR101247147B1 (en) | 2007-03-05 | 2013-03-29 | 디지털옵틱스 코포레이션 유럽 리미티드 | Face searching and detection in a digital image acquisition device |
WO2008109622A1 (en) | 2007-03-05 | 2008-09-12 | Fotonation Vision Limited | Face categorization and annotation of a mobile phone contact list |
WO2008109708A1 (en) | 2007-03-05 | 2008-09-12 | Fotonation Vision Limited | Red eye false positive filtering using face location and orientation |
US7773118B2 (en) | 2007-03-25 | 2010-08-10 | Fotonation Vision Limited | Handheld article with movement discrimination |
JP4714174B2 (en) * | 2007-03-27 | 2011-06-29 | 富士フイルム株式会社 | Imaging device |
WO2008131823A1 (en) | 2007-04-30 | 2008-11-06 | Fotonation Vision Limited | Method and apparatus for automatically controlling the decisive moment for an image acquisition device |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
US7999851B2 (en) | 2007-05-24 | 2011-08-16 | Tessera Technologies Ltd. | Optical alignment of cameras with extended depth of field |
US20080309770A1 (en) | 2007-06-18 | 2008-12-18 | Fotonation Vision Limited | Method and apparatus for simulating a camera panning effect |
US8068693B2 (en) | 2007-07-18 | 2011-11-29 | Samsung Electronics Co., Ltd. | Method for constructing a composite image |
US8717412B2 (en) * | 2007-07-18 | 2014-05-06 | Samsung Electronics Co., Ltd. | Panoramic image production |
US8503818B2 (en) | 2007-09-25 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Eye defect detection in international standards organization images |
US8310587B2 (en) | 2007-12-04 | 2012-11-13 | DigitalOptics Corporation International | Compact camera optics |
KR101454609B1 (en) | 2008-01-18 | 2014-10-27 | 디지털옵틱스 코포레이션 유럽 리미티드 | Image processing method and apparatus |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US8212864B2 (en) | 2008-01-30 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for using image acquisition data to detect and correct image defects |
US7855737B2 (en) * | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
US8520089B2 (en) | 2008-07-30 | 2013-08-27 | DigitalOptics Corporation Europe Limited | Eye beautification |
KR101446975B1 (en) * | 2008-07-30 | 2014-10-06 | 디지털옵틱스 코포레이션 유럽 리미티드 | Automatic face and skin beautification using face detection |
US8081254B2 (en) | 2008-08-14 | 2011-12-20 | DigitalOptics Corporation Europe Limited | In-camera based method of detecting defect eye with high accuracy |
WO2010063463A2 (en) | 2008-12-05 | 2010-06-10 | Fotonation Ireland Limited | Face recognition using face tracker classifier data |
JP5456159B2 (en) | 2009-05-29 | 2014-03-26 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Method and apparatus for separating the top of the foreground from the background |
US8208746B2 (en) * | 2009-06-29 | 2012-06-26 | DigitalOptics Corporation Europe Limited | Adaptive PSF estimation technique using a sharp preview and a blurred image |
US8379917B2 (en) | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
2010
- 2010-12-02 US US12/959,089 patent/US8872887B2/en active Active
- 2010-12-02 US US12/959,151 patent/US8692867B2/en not_active Expired - Fee Related
- 2010-12-02 US US12/959,137 patent/US20110216157A1/en not_active Abandoned

2011
- 2011-03-01 WO PCT/EP2011/052970 patent/WO2011107448A2/en active Application Filing
Patent Citations (84)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US1906509A (en) * | 1928-01-17 | 1933-05-02 | Firm Photogrammetrie G M B H | Correction for distortion the component pictures produced from different photographic registering devices |
US3251283A (en) * | 1964-02-11 | 1966-05-17 | Itek Corp | Photographic system |
US3356002A (en) * | 1965-07-14 | 1967-12-05 | Gen Precision Inc | Wide angle optical system |
US4555168A (en) * | 1981-08-24 | 1985-11-26 | Walter Meier | Device for projecting steroscopic, anamorphotically compressed pairs of images on to a spherically curved wide-screen surface |
US5526045A (en) * | 1983-12-29 | 1996-06-11 | Matsushita Electric Industrial Co., Ltd. | Camera apparatus which automatically corrects image fluctuations |
US5000549A (en) * | 1988-09-30 | 1991-03-19 | Canon Kabushiki Kaisha | Zoom lens for stabilizing the image |
US5633756A (en) * | 1991-10-31 | 1997-05-27 | Canon Kabushiki Kaisha | Image stabilizing apparatus |
US5359513A (en) * | 1992-11-25 | 1994-10-25 | Arch Development Corporation | Method and system for detection of interval change in temporally sequential chest images |
US5579169A (en) * | 1993-09-13 | 1996-11-26 | Nikon Corporation | Underwater wide angle lens |
US5585966A (en) * | 1993-12-28 | 1996-12-17 | Nikon Corporation | Zoom lens with vibration reduction function |
US20020063802A1 (en) * | 1994-05-27 | 2002-05-30 | Be Here Corporation | Wide-angle dewarping method and apparatus |
US5675380A (en) * | 1994-12-29 | 1997-10-07 | U.S. Philips Corporation | Device for forming an image and method of correcting geometrical optical distortions in an image |
US5850470A (en) * | 1995-08-30 | 1998-12-15 | Siemens Corporate Research, Inc. | Neural network for locating and recognizing a deformable object |
US6392687B1 (en) * | 1997-05-08 | 2002-05-21 | Be Here Corporation | Method and apparatus for implementing a panoptic camera system |
US6219089B1 (en) * | 1997-05-08 | 2001-04-17 | Be Here Corporation | Method and apparatus for electronically distributing images from a panoptic camera system |
US6466254B1 (en) * | 1997-05-08 | 2002-10-15 | Be Here Corporation | Method and apparatus for electronically distributing motion panoramic images |
US5960108A (en) * | 1997-06-12 | 1999-09-28 | Apple Computer, Inc. | Method and system for creating an image-based virtual reality environment utilizing a fisheye lens |
US6044181A (en) * | 1997-08-01 | 2000-03-28 | Microsoft Corporation | Focal length estimation method and apparatus for construction of panoramic mosaic images |
US6078701A (en) * | 1997-08-01 | 2000-06-20 | Sarnoff Corporation | Method and apparatus for performing local to global multiframe alignment to construct mosaic images |
US5986668A (en) * | 1997-08-01 | 1999-11-16 | Microsoft Corporation | Deghosting method and apparatus for construction of image mosaics |
US6750903B1 (en) * | 1998-03-05 | 2004-06-15 | Hitachi, Ltd. | Super high resolution camera |
US20020114536A1 (en) * | 1998-09-25 | 2002-08-22 | Yalin Xiong | Aligning rectilinear images in 3D through projective registration and calibration |
US6222683B1 (en) * | 1999-01-13 | 2001-04-24 | Be Here Corporation | Panoramic imaging arrangement |
US20040233461A1 (en) * | 1999-11-12 | 2004-11-25 | Armstrong Brian S. | Methods and apparatus for measuring orientation and distance |
US6664956B1 (en) * | 2000-10-12 | 2003-12-16 | Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret A. S. | Method for generating a personalized 3-D face model |
US7907793B1 (en) * | 2001-05-04 | 2011-03-15 | Legend Films Inc. | Image sequence depth enhancement system and method |
US20030103063A1 (en) * | 2001-12-03 | 2003-06-05 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
US7327899B2 (en) * | 2002-06-28 | 2008-02-05 | Microsoft Corp. | System and method for head size equalization in 360 degree panoramic images |
US7058237B2 (en) * | 2002-06-28 | 2006-06-06 | Microsoft Corporation | Real-time wide-angle image correction system and method for computer image viewing |
US20040061787A1 (en) * | 2002-09-30 | 2004-04-01 | Zicheng Liu | Foveated wide-angle imaging system and method for capturing and viewing wide-angle images in real time |
US20100305869A1 (en) * | 2003-08-01 | 2010-12-02 | Dexcom, Inc. | Transcutaneous analyte sensor |
US8000901B2 (en) * | 2003-08-01 | 2011-08-16 | Dexcom, Inc. | Transcutaneous analyte sensor |
US7499638B2 (en) * | 2003-08-28 | 2009-03-03 | Olympus Corporation | Object recognition apparatus |
US20050166054A1 (en) * | 2003-12-17 | 2005-07-28 | Yuji Fujimoto | Data processing apparatus and method and encoding device of same |
US20100066822A1 (en) * | 2004-01-22 | 2010-03-18 | Fotonation Ireland Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US20100014721A1 (en) * | 2004-01-22 | 2010-01-21 | Fotonation Ireland Limited | Classification System for Consumer Digital Images using Automatic Workflow and Face Detection and Recognition |
US20050169529A1 (en) * | 2004-02-03 | 2005-08-04 | Yuri Owechko | Active learning system for object fingerprinting |
US20100002071A1 (en) * | 2004-04-30 | 2010-01-07 | Grandeye Ltd. | Multiple View and Multiple Object Processing in Wide-Angle Video Camera |
US20060093238A1 (en) * | 2004-10-28 | 2006-05-04 | Eran Steinberg | Method and apparatus for red-eye detection in an acquired digital image using face recognition |
US7609850B2 (en) * | 2004-12-09 | 2009-10-27 | Sony United Kingdom Limited | Data processing apparatus and method |
US20060140449A1 (en) * | 2004-12-27 | 2006-06-29 | Hitachi, Ltd. | Apparatus and method for detecting vehicle |
US7280289B2 (en) * | 2005-02-21 | 2007-10-09 | Fujinon Corporation | Wide angle imaging lens |
US7613357B2 (en) * | 2005-09-20 | 2009-11-03 | Gm Global Technology Operations, Inc. | Method for warped image object recognition |
US7843652B2 (en) * | 2005-10-21 | 2010-11-30 | Fujinon Corporation | Wide-angle imaging lens |
US7495845B2 (en) * | 2005-10-21 | 2009-02-24 | Fujinon Corporation | Wide-angle imaging lens |
US20070172150A1 (en) * | 2006-01-19 | 2007-07-26 | Shuxue Quan | Hand jitter reduction compensating for rotational motion |
US20070206941A1 (en) * | 2006-03-03 | 2007-09-06 | Atsushi Maruyama | Imaging apparatus and imaging method |
US7929221B2 (en) * | 2006-04-10 | 2011-04-19 | Alex Ning | Ultra-wide angle objective lens |
US20090074323A1 (en) * | 2006-05-01 | 2009-03-19 | Nikon Corporation | Image processing method, carrier medium carrying image processing program, image processing apparatus, and imaging apparatus |
US8094183B2 (en) * | 2006-08-11 | 2012-01-10 | Funai Electric Co., Ltd. | Panoramic imaging device |
US20080075352A1 (en) * | 2006-09-27 | 2008-03-27 | Hisae Shibuya | Defect classification method and apparatus, and defect inspection apparatus |
US7612946B2 (en) * | 2006-10-24 | 2009-11-03 | Nanophotonics Co., Ltd. | Wide-angle lenses |
US20100046837A1 (en) * | 2006-11-21 | 2010-02-25 | Koninklijke Philips Electronics N.V. | Generation of depth map for an image |
US8090148B2 (en) * | 2007-01-24 | 2012-01-03 | Sanyo Electric Co., Ltd. | Image processor, vehicle, and image processing method |
US20080175436A1 (en) * | 2007-01-24 | 2008-07-24 | Sanyo Electric Co., Ltd. | Image processor, vehicle, and image processing method |
US20100303381A1 (en) * | 2007-05-15 | 2010-12-02 | Koninklijke Philips Electronics N.V. | Imaging system and imaging method for imaging a region of interest |
US7848548B1 (en) * | 2007-06-11 | 2010-12-07 | Videomining Corporation | Method and system for robust demographic classification using pose independent model from sequence of face images |
US7835071B2 (en) * | 2007-09-10 | 2010-11-16 | Sumitomo Electric Industries, Ltd. | Far-infrared camera lens, lens unit, and imaging apparatus |
US8144033B2 (en) * | 2007-09-26 | 2012-03-27 | Nissan Motor Co., Ltd. | Vehicle periphery monitoring apparatus and image displaying method |
US20100215251A1 (en) * | 2007-10-11 | 2010-08-26 | Koninklijke Philips Electronics N.V. | Method and device for processing a depth-map |
US8379014B2 (en) * | 2007-10-11 | 2013-02-19 | Mvtec Software Gmbh | System and method for 3D object recognition |
US20090310828A1 (en) * | 2007-10-12 | 2009-12-17 | The University Of Houston System | An automated method for human face modeling and relighting with application to face recognition |
US20090180713A1 (en) * | 2008-01-10 | 2009-07-16 | Samsung Electronics Co., Ltd | Method and system of adaptive reformatting of digital image |
US8311344B2 (en) * | 2008-02-15 | 2012-11-13 | Digitalsmiths, Inc. | Systems and methods for semantically classifying shots in video |
US20090220156A1 (en) * | 2008-02-29 | 2009-09-03 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and storage medium |
US20110002071A1 (en) * | 2008-03-06 | 2011-01-06 | Keqing Zhang | Leakage protective plug |
US8134479B2 (en) * | 2008-03-27 | 2012-03-13 | Mando Corporation | Monocular motion stereo-based free parking space detection apparatus and method |
US20100033551A1 (en) * | 2008-08-08 | 2010-02-11 | Adobe Systems Incorporated | Content-Aware Wide-Angle Images |
US8194993B1 (en) * | 2008-08-29 | 2012-06-05 | Adobe Systems Incorporated | Method and apparatus for matching image metadata to a profile database to determine image processing parameters |
US8340453B1 (en) * | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8264524B1 (en) * | 2008-09-17 | 2012-09-11 | Grandeye Limited | System for streaming multiple regions deriving from a wide-angle camera |
US20100166300A1 (en) * | 2008-12-31 | 2010-07-01 | Stmicroelectronics S.R.I. | Method of generating motion vectors of images of a video sequence |
US20110298795A1 (en) * | 2009-02-18 | 2011-12-08 | Koninklijke Philips Electronics N.V. | Transferring of 3d viewer metadata |
US20110085049A1 (en) * | 2009-10-14 | 2011-04-14 | Zoran Corporation | Method and apparatus for image stabilization |
US20110116720A1 (en) * | 2009-11-17 | 2011-05-19 | Samsung Electronics Co., Ltd. | Method and apparatus for image processing |
US20110216156A1 (en) * | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
US20110216158A1 (en) * | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
US20120249725A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US20120250937A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US20120249727A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Superresolution enhancment of peripheral regions in nonlinear lens geometries |
US20120249726A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries |
US20120249841A1 (en) * | 2011-03-31 | 2012-10-04 | Tessera Technologies Ireland Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US8493459B2 (en) * | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of distorted images |
US8493460B2 (en) * | 2011-09-15 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Registration of differently scaled images |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110216156A1 (en) * | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
US8692867B2 (en) | 2010-03-05 | 2014-04-08 | DigitalOptics Corporation Europe Limited | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US8872887B2 (en) | 2010-03-05 | 2014-10-28 | Fotonation Limited | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US8723959B2 (en) | 2011-03-31 | 2014-05-13 | DigitalOptics Corporation Europe Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US20130307922A1 (en) * | 2012-05-17 | 2013-11-21 | Hong-Long Chou | Image pickup device and image synthesis method thereof |
US8953013B2 (en) * | 2012-05-17 | 2015-02-10 | Altek Corporation | Image pickup device and image synthesis method thereof |
US9784943B1 (en) | 2014-03-16 | 2017-10-10 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with a low sag aspheric lens element |
US10139599B1 (en) | 2014-03-16 | 2018-11-27 | Navitar Industries, Llc | Optical assembly for a wide field of view camera with low TV distortion |
US9316820B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low astigmatism |
US9494772B1 (en) | 2014-03-16 | 2016-11-15 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low field curvature |
US9726859B1 (en) | 2014-03-16 | 2017-08-08 | Navitar Industries, Llc | Optical assembly for a wide field of view camera with low TV distortion |
US9778444B1 (en) | 2014-03-16 | 2017-10-03 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with low astigmatism |
US9091843B1 (en) | 2014-03-16 | 2015-07-28 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low track length to focal length ratio |
US9995910B1 (en) | 2014-03-16 | 2018-06-12 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with high MTF |
US10107989B1 (en) | 2014-03-16 | 2018-10-23 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with low field curvature |
US9316808B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with a low sag aspheric lens element |
US10139595B1 (en) | 2014-03-16 | 2018-11-27 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low first lens diameter to image diagonal ratio |
US10317652B1 (en) | 2014-03-16 | 2019-06-11 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with low astigmatism |
US10386604B1 (en) | 2014-03-16 | 2019-08-20 | Navitar Industries, Llc | Compact wide field of view digital camera with stray light impact suppression |
US10545314B1 (en) | 2014-03-16 | 2020-01-28 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low lateral chromatic aberration |
US10545313B1 (en) | 2014-03-16 | 2020-01-28 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with a low sag aspheric lens element |
US10739561B1 (en) | 2014-03-16 | 2020-08-11 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with high MTF |
US10746967B2 (en) | 2014-03-16 | 2020-08-18 | Navitar Industries, Llc | Optical assembly for a wide field of view point action camera with low field curvature |
US11754809B2 (en) | 2014-03-16 | 2023-09-12 | Navitar, Inc. | Optical assembly for a wide field of view point action camera with low field curvature |
US20220224877A1 (en) * | 2017-04-01 | 2022-07-14 | Intel Corporation | Barreling and compositing of images |
US11800083B2 (en) * | 2017-04-01 | 2023-10-24 | Intel Corporation | Barreling and compositing of images |
Also Published As
Publication number | Publication date |
---|---|
US20110216158A1 (en) | 2011-09-08 |
US8872887B2 (en) | 2014-10-28 |
US8692867B2 (en) | 2014-04-08 |
WO2011107448A3 (en) | 2011-11-17 |
US20110216156A1 (en) | 2011-09-08 |
WO2011107448A2 (en) | 2011-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8692867B2 (en) | Object detection and rendering for wide field of view (WFOV) image acquisition systems | |
CN107925751B (en) | System and method for multiple views noise reduction and high dynamic range | |
US9325899B1 (en) | Image capturing device and digital zooming method thereof | |
US8860816B2 (en) | Scene enhancements in off-center peripheral regions for nonlinear lens geometries | |
US8982180B2 (en) | Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries | |
US8896703B2 (en) | Superresolution enhancment of peripheral regions in nonlinear lens geometries | |
US8723959B2 (en) | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries | |
US8189960B2 (en) | Image processing apparatus, image processing method, program and recording medium | |
US9961272B2 (en) | Image capturing apparatus and method of controlling the same | |
CN104363385B (en) | Line-oriented hardware implementing method for image fusion | |
JP2013009050A (en) | Image processing apparatus and image processing method | |
CN109166076B (en) | Multi-camera splicing brightness adjusting method and device and portable terminal | |
CN111866523B (en) | Panoramic video synthesis method and device, electronic equipment and computer storage medium | |
US20120081560A1 (en) | Digital photographing apparatus and method of controlling the same | |
US20130129221A1 (en) | Image processing device, image processing method, and recording medium | |
KR20160137289A (en) | Photographing apparatus and method for controlling the same | |
CN110784642B (en) | Image processing apparatus, control method thereof, storage medium, and imaging apparatus | |
WO2017092261A1 (en) | Camera module, mobile terminal, and image shooting method and apparatus therefor | |
JP6379812B2 (en) | Image processing system | |
US20220172318A1 (en) | Method for capturing and processing a digital panoramic image | |
CN113016002A (en) | Selective distortion or distortion correction in images from cameras with wide-angle lenses | |
JP7458769B2 (en) | Image processing device, imaging device, image processing method, program and recording medium | |
KR102052725B1 (en) | Method and apparatus for generating virtual reality image inside the vehicle by using image stitching technique | |
Zhu et al. | Expanding a fish-eye panoramic image through perspective transformation | |
JP2015119436A (en) | Imaging apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TESSERA TECHNOLOGIES IRELAND LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BIGIOI, PETRONEL;DRIMBAREAN, ALEXANDRU;STEC, PIOTR;AND OTHERS;SIGNING DATES FROM 20110113 TO 20110118;REEL/FRAME:026167/0014 |
|
AS | Assignment |
Owner name: DIGITALOPTICS CORPORATION EUROPE LIMITED, IRELAND Free format text: CHANGE OF NAME;ASSIGNOR:TESSERA TECHNOLOGIES IRELAND LIMITED;REEL/FRAME:028593/0661 Effective date: 20110713 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |