US10097766B2 - Provision of exposure times for a multi-exposure image
- Publication number: US10097766B2
- Application number: US15/253,623
- Authority: US (United States)
- Prior art keywords: exposure, images, pixels, image, image sensor
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N23/611 — Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
- H04N23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/71 — Circuitry for evaluating the brightness variation
- H04N23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N23/743 — Bracketing, i.e. taking a series of images with varying exposure conditions
- H04N25/441 — Extracting pixel data by partially reading an SSIS array, reading contiguous pixels from selected rows or columns, e.g. interlaced scanning
- H04N25/533 — Control of the integration time by using differing integration times for different sensor regions
- H04N25/589 — Control of the dynamic range involving two or more exposures acquired sequentially, with different integration times, e.g. short and long exposures
- All of the above fall under H04N (Pictorial communication, e.g. television). Legacy codes: H04N5/2353, H04N5/23219, H04N5/23222, H04N5/2351, H04N5/2355, H04N5/2356, H04N5/3452, H04N5/3535, H04N5/35581
Abstract
According to one aspect, there is provided an apparatus comprising at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and control a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
Description
In modern cameras, it is common to take multi-exposure images, for example, High Dynamic Range (HDR) images. To get optimal quality in such cases, it is important to set the exposure times so that as large a part of the scene as possible is well exposed in one or more of the images. Further, because every capture location and its lighting conditions are unique, setting the exposure times correctly is challenging.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In one embodiment, an apparatus is provided. The apparatus comprises at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and to control a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
In another embodiment, an apparatus is provided. The apparatus comprises a camera comprising an image sensor configured to capture image data of a scene, the image sensor having at least two sets of pixels enabling different exposure times, a viewfinder configured to display image data captured with the image sensor, and a controller configured to control a first set of pixels of the image sensor for exposure for the viewfinder and a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
In another embodiment, a method is provided. The method comprises controlling, by at least one processor, a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and controlling, by the at least one processor, a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
Many of the attendant features will be more readily appreciated as they become better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
FIG. 1 is a system diagram depicting an apparatus including a variety of optional hardware and software components;
FIG. 2A illustrates pixels of an image sensor according to one embodiment;
FIG. 2B illustrates a block diagram for controlling an image sensor and a viewfinder display with a controller according to one embodiment;
FIG. 3 illustrates a flow diagram of a method for controlling an image sensor according to one embodiment; and
FIG. 4 illustrates a flow diagram of a method for controlling an image sensor according to one embodiment.
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. However, the same or equivalent functions and sequences may be accomplished by different examples. Furthermore, as used in this application and in the claims, the singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. Additionally, the term “includes” means “comprises.” Further, the term “coupled” encompasses mechanical, electrical, magnetic, optical, as well as other practical ways of coupling or linking items together, and does not exclude the presence of intermediate elements between the coupled items.
In the following description, the term “multi-exposure image” refers to a single image constructed from multiple component images obtained using different exposure times.
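For illustration only (this is not the algorithm claimed in the patent), the following Python sketch shows one common way to construct such an image: each bracketed frame is normalised by its exposure time and blended with weights that favour well-exposed pixels. The function name, the weighting scheme and the synthetic example frames are assumptions made for the sake of the example.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Fuse differently exposed frames into one multi-exposure image.

    Illustrative sketch only: frames are normalised by exposure time
    and blended with hat-shaped weights so that mid-range pixels
    dominate while crushed shadows and blown highlights are ignored.
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(img - 0.5) * 2.0   # 1 at mid-grey, 0 at the clip points
        acc += w * (img / t)                # per-frame radiance estimate
        weight_sum += w
    radiance = acc / np.maximum(weight_sum, 1e-6)
    return (radiance / radiance.max() * 255).astype(np.uint8)

# Three synthetic frames bracketed at 10 ms, 40 ms and 160 ms.
times = [0.01, 0.04, 0.16]
frames = [np.clip(np.random.rand(4, 4) * 255 * t * 25, 0, 255).astype(np.uint8)
          for t in times]
fused = merge_exposures(frames, times)
```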
When taking multi-exposure images, one possibility is to use the auto exposure from a viewfinder and to use preset longer and shorter exposures relative to a base image. This will not, however, provide optimal results in many cases. Another possibility is to vary the exposure time of the viewfinder stream. This, however, causes the visible viewfinder stream to change brightness as different exposure times are set, which has serious implications for the user experience: the brightness of the viewfinder changes dramatically.
In at least some embodiments a solution is provided where the exposure times of a multi-exposure image are customized for each scene. Further, the user experience is maintained because a separate set of pixels is used for exposing the scene for the camera viewfinder, so there are no significant changes in the brightness of the viewfinder. Further, in at least some embodiments, the number of images that need to be taken for the multi-exposure image can be optimized.
In at least some embodiments, a first set of pixels of an image sensor is controlled for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times. At the same time, a second set of pixels of the image sensor is controlled for exposure analysis of the scene for images to be captured for a multi-exposure image.
FIG. 1 is a system diagram depicting an apparatus 100 including a variety of optional hardware and software components. Any component in the apparatus 100 can communicate with any other component, although not all connections are shown, for ease of illustration. The apparatus 100 can be any of a variety of computing devices (for example, a cell phone, a smartphone, a handheld computer, a tablet computer, a laptop computer, a personal computer, a Personal Digital Assistant (PDA), a digital camera, etc.). The illustrated apparatus 100 can include a controller or processor 102 (e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions. An operating system 104 can control the allocation and usage of the components and support for one or more application programs 134. The application programs can include common mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), or any other computing application.
The illustrated apparatus 100 can include a memory 106. The memory 106 can include non-removable memory 108 and/or removable memory 110. The non-removable memory 108 can include RAM, ROM, flash memory, a hard disk, or other well-known memory storage technologies. The removable memory 110 can include flash memory or a Subscriber Identity Module (SIM) card, which is well known in mobile communication systems, or other well-known memory storage technologies, such as “smart cards”. The memory 106 can be used for storing data and/or code for running the operating system 104 and the applications 134. If the apparatus 100 is a mobile phone or smart phone, the memory 106 can be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers can be transmitted to a network server to identify users and equipment.
The apparatus 100 can support one or more input devices 112, such as a touchscreen 114, microphone 116, camera 118 and/or physical keys or a keyboard 120 and one or more output devices 122, such as a speaker 124 and a display 126. Some devices can serve more than one input/output function. For example, the touchscreen 114 and the display 126 can be combined in a single input/output device. The input devices 112 can include a Natural User Interface (NUI). An NUI is any interface technology that enables a user to interact with a device in a “natural” manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like. Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence. Other examples of a NUI include motion gesture detection using accelerometers/gyroscopes, facial recognition, 3D displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods). Thus, in one specific example, the operating system 104 or applications 134 can comprise speech-recognition software as part of a voice user interface that allows a user to operate the apparatus 100 via voice commands. Further, the apparatus 100 can comprise input devices and software that allows for user interaction via a user's spatial gestures, such as detecting and interpreting gestures to provide input to a gaming application.
A wireless modem 128 can be coupled to an antenna (not shown) and can support two-way communications between the processor 102 and external devices, as is well understood in the art. The modem 128 is shown generically and can include a cellular modem for communicating with the mobile communication network and/or other radio-based modems (e.g., Bluetooth or Wi-Fi). The wireless modem 128 is typically configured for communication with one or more cellular networks, such as a GSM (Global System for Mobile communications) network for data and voice communications, a WCDMA (Wideband Code Division Multiple Access) network, or an LTE (Long Term Evolution)/4G LTE network, whether within a single cellular network, between cellular networks, or between the mobile apparatus and a public switched telephone network (PSTN).
The apparatus 100 can further include at least one input/output port 130 and/or a physical connector 132, which can be a USB port, a USB-C port, IEEE 1394 (FireWire) port, and/or RS-232 port. The illustrated components are not required or all-inclusive, as any components can be deleted and other components can be added.
Any combination of the illustrated components disclosed in FIG. 1, for example, at least one of the processor 102 and the memory 106, may constitute means for controlling a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and means for controlling a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
FIG. 2A illustrates pixels of an image sensor 200 according to one embodiment. The image sensor 200 comprises pixels 202. Filled pixels 204 represent a first set of pixels and non-filled pixels 206 represent a second set of pixels. Although the first and second pixel sets 204, 206 are illustrated as odd and even rows, in another embodiment the pixel sets may be arranged in a chessboard pattern where every other pixel belongs to the first pixel set 204 and every other pixel to the second pixel set 206. The image sensor 200 is able to vary the exposure time of the pixels 202 so that the first pixel set 204 is exposed using a first exposure time and the second pixel set 206 is exposed using a second exposure time. In one embodiment, the image sensor 200 is a sensor supporting interlaced exposure, in which the two pixel sets are pairs of rows. FIG. 2B illustrates a block diagram for controlling the image sensor 200 and a viewfinder display 210 with a controller 208 according to one embodiment.
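To make the two pixel sets of FIG. 2A concrete, here is a minimal Python sketch under a toy linear exposure model: it builds boolean masks for the interlaced-rows layout (and the chessboard alternative) and exposes each set for a different time. The helper names and the pattern keywords are the editor's assumptions, not a real sensor interface.

```python
import numpy as np

def pixel_set_masks(height, width, pattern="rows"):
    """Boolean masks for the two pixel sets of FIG. 2A (sketch only)."""
    rows, cols = np.indices((height, width))
    if pattern == "rows":             # odd and even rows, as drawn in FIG. 2A
        first = rows % 2 == 0
    elif pattern == "chessboard":     # every other pixel, alternating per row
        first = (rows + cols) % 2 == 0
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    return first, ~first              # first set 204, second set 206

def expose(scene, mask, exposure_time):
    """Toy exposure model: signal grows linearly with time, then clips."""
    frame = np.zeros(scene.shape, dtype=np.float64)
    frame[mask] = np.clip(scene[mask] * exposure_time, 0.0, 1.0)
    return frame

scene = np.random.rand(6, 8) * 50.0               # scene radiance, arbitrary units
viewfinder_set, analysis_set = pixel_set_masks(6, 8, "rows")
vf_frame = expose(scene, viewfinder_set, 1 / 60)  # steady viewfinder exposure
probe = expose(scene, analysis_set, 1 / 500)      # short probe for exposure analysis
```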
FIG. 3 illustrates a flow diagram of a method for controlling an image sensor according to one embodiment. The image sensor may be the image sensor 200 illustrated in FIG. 2A and FIG. 2B. At 300 the first set of pixels 204 of the image sensor 200 is controlled by the controller 208 for exposure of a scene for a camera viewfinder. Thus, the first set of pixels 204 is exposed as a regular camera viewfinder and shown on a viewfinder screen. An image stream of the scene to be photographed is displayed on the viewfinder display 210. Because the first set of pixels is used for exposing the scene only for the camera viewfinder, there are no significant changes in the brightness of the viewfinder.
At 302 the second set of pixels of the image sensor 200 is controlled by the controller 208 for exposure analysis of the scene for images to be captured for a multi-exposure image. Thus, the first and second sets of pixels 204, 206 are used simultaneously for different purposes. Because the second set of pixels 206 is used for the exposure analysis, the viewfinder image, which is provided by the first set of pixels 204, is not affected. The controller 208 may refer to at least one processor connected to at least one memory, and the at least one memory may store program instructions that, when executed by the at least one processor, cause the performance of the steps 300 and 302.
When separate sets of pixels are used for a camera viewfinder and for exposure analysis of a scene for images to be captured for a multi-exposure image, exposure times of the multi-exposure image are customized for each scene while maintaining user experience. Further, in some embodiments, this enables a solution where the number of images to be taken for the multi-exposure image varies depending on the scene.
Exposure analysis may refer to one or more analysis steps that enable determining exposure times for the images to be captured for the multi-exposure image. For example, the exposure analysis may analyze the light intensity of different areas of the scene. Further, the exposure analysis may also determine how many images are needed for the multi-exposure image. The exposure analysis may use various image processing routines.
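A minimal sketch of such an analysis step, assuming a simple grid split and clipping thresholds chosen by the editor (the patent specifies neither): each region of the frame is summarised by its mean level and its fraction of under- and over-exposed pixels.

```python
import numpy as np

def analyse_regions(frame, grid=(3, 3)):
    """Report per-region exposure statistics for a frame with values in [0, 1]."""
    h, w = frame.shape
    gh, gw = grid
    report = []
    for i in range(gh):
        for j in range(gw):
            region = frame[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
            report.append({
                "region": (i, j),
                "mean": float(region.mean()),
                "under": float((region < 0.02).mean()),  # crushed shadows
                "over": float((region > 0.98).mean()),   # blown highlights
            })
    return report

frame = np.random.rand(60, 90)   # stand-in for a probe frame from the second pixel set
for entry in analyse_regions(frame):
    print(entry)
```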
FIG. 4 illustrates a flow diagram of a method for controlling an image sensor according to one embodiment. The image sensor may again be the image sensor 200 illustrated in FIG. 2A and FIG. 2B. At 400 the first set of pixels 204 of the image sensor 200 is controlled for exposure of a scene for a camera viewfinder. Thus, the first set of pixels 204 is exposed as a regular camera viewfinder and shown on a viewfinder screen. An image stream of the scene to be photographed is displayed on the viewfinder display.
At 402 a series of images with different exposure times is caused to be captured with the second set of pixels 206. The purpose of the second set of pixels 206 is to enable analysis of the light in the scene.
At 404 the captured series of images is analyzed to determine the light intensity of different areas of the scene, in other words, how well exposed the different areas of the scene are.
At 406 the data from the analysis is used to determine exposure times for the images to be captured in order to produce the final multi-exposure image. If at 408 the end result of the analysis is that only a single image is needed (i.e. no multi-exposure image is needed), at 410 a single exposure time is used for a single image without multi-exposure. On the other hand, if at 408 the end result is that multiple exposure times are needed, at 412 it may be determined whether a flash is needed in any of the images. In some embodiments, it may be that some parts of the scene can be illuminated only using the flash. If no flash is needed, at 416 the determined exposure times are used for the images to be captured in order to produce the multi-exposure image. If a flash is needed in at least one of the images, flash control information 414 is provided in order to synchronize the flash with the exposure times.
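The decision flow of steps 406 to 416 might be sketched as follows, consuming a per-region report like the one from the previous example. The thresholds, the base exposure time, the bracket spacing and the flash rule are all illustrative assumptions, not values taken from the patent.

```python
def plan_capture(region_report, max_images=5):
    """Map exposure-analysis results to a capture plan (sketch of steps 406-416)."""
    under = any(r["under"] > 0.2 for r in region_report)   # dark areas present?
    over = any(r["over"] > 0.2 for r in region_report)     # bright areas present?
    base = 1 / 60                                          # assumed base exposure

    if not under and not over:
        # Step 410: a single, well-exposed image suffices.
        return {"exposure_times": [base], "flash": False}

    times = [base]
    if over:
        times.insert(0, base / 4)    # short frame to recover highlights
    if under:
        times.append(base * 4)       # long frame to lift shadows
    # Step 412: assume flash is needed if shadowed regions stay nearly black.
    flash = under and all(r["mean"] < 0.05
                          for r in region_report if r["under"] > 0.2)
    return {"exposure_times": times[:max_images], "flash": flash}

report = [{"region": (0, 0), "mean": 0.04, "under": 0.6, "over": 0.0},
          {"region": (0, 1), "mean": 0.70, "under": 0.0, "over": 0.3}]
print(plan_capture(report))   # three exposure times, flash requested
```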
Further, in one embodiment, an apparatus may comprise two separate image sensors, and the controller 208 is configured to control a first image sensor for exposure of a scene for a camera viewfinder. Simultaneously, the controller 208 is configured to control a second image sensor having at least two sets of pixels enabling different exposure times. The controller 208 is configured to control a first set and a second set of image pixels of the second image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image. When two separate image sensors are used, it is possible to use different exposure times with the first and second sets of pixels and thus to shorten the time needed for determining the final exposure times for the images to be captured for the multi-exposure image.
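One way to see the speed-up, sketched under the assumption that the second sensor's two pixel sets can run different exposure times within the same readout: the probe bracket is then consumed two exposures per frame. The pairing strategy below is the editor's assumption.

```python
def probe_pairs(exposure_ladder):
    """Pair the probe bracket so each readout covers two exposure times
    (two-sensor embodiment sketch; an odd-length ladder drops its last entry)."""
    it = iter(sorted(exposure_ladder))
    return list(zip(it, it))

# Four probe exposures finish in two readouts instead of four.
print(probe_pairs([1 / 500, 1 / 125, 1 / 30, 1 / 8]))
```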
In some embodiments, when determining exposure times for the images to be captured in order to produce the final multi-exposure image, the number of images to be captured for the multi-exposure image may be adapted depending on the scene and based on the analysis. A scene with difficult lighting may require more images to be taken in order to produce the final multi-exposure image than a scene with easier lighting conditions. Further, in some embodiments, if a face or faces are detected in the scene, this may cause the analysis to keep the number of images to be captured lower than in a situation where no faces are detected, because a face indicates that movement is probable in the images to be captured. Further, in some embodiments, movement may be detected based on the captured series of images, and information relating to the movement may be used in determining exposure times and/or the number of images to be captured for the multi-exposure image.
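A possible heuristic for this adaptation, assuming a normalised motion score and caps chosen arbitrarily for the example (neither comes from the patent):

```python
def adapt_image_count(planned_times, face_detected, motion_score):
    """Shorten the bracket when movement is likely (faces or measured motion)."""
    cap = len(planned_times)
    if face_detected:
        cap = min(cap, 3)            # people move; keep the burst short
    if motion_score > 0.5:           # motion_score assumed normalised to 0..1
        cap = min(cap, 2)
    # Keep the extremes of the ladder so the dynamic range is preserved.
    times = sorted(planned_times)
    while len(times) > cap:
        times.pop(len(times) // 2)   # drop middle exposures first
    return times

print(adapt_image_count([1 / 250, 1 / 60, 1 / 15, 1 / 4],
                        face_detected=True, motion_score=0.2))
```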
When separate sets of pixels are used for a camera viewfinder and for exposure analysis of a scene for images to be captured for a multi-exposure image, exposure times of the multi-exposure image are customized for each scene while maintaining user experience. Further, in some embodiments, this enables a solution where the number of images to be taken for the multi-exposure image varies depending on the scene.
According to another aspect, there is provided an apparatus comprising at least one processing unit and at least one memory. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and control a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
In an embodiment, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to cause capture of a series of images with different exposure times with the second set of pixels, analyze the series of images to determine light intensity of different areas of the scene, and determine exposure times for the images to be captured for the multi-exposure image based on the analysis.
In an embodiment, in any combination with any of the above embodiments, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to determine based on the analysis whether flash is to be used in any of the images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect a face in the series of images, and limit the number of the images to be captured for the multi-exposure image in response to the detection.
In an embodiment, in any combination with any of the above embodiments, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to adapt, based on the analysis, the number of images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the apparatus is implemented as one of a mobile phone, a mobile device or a digital camera.
In an embodiment, in any combination with any of the above embodiments, the image sensor supports interlaced exposure.
According to another aspect, there is provided an apparatus comprising a camera comprising an image sensor configured to capture image data of a scene, the image sensor having at least two sets of pixels enabling different exposure times, a viewfinder configured to display image data captured with the image sensor, and a controller configured to control a first set of pixels of the image sensor for exposure for the viewfinder and a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
In an embodiment, the controller is configured to cause capture of a series of images with different exposure times with the second set of pixels, analyze the series of images to determine light intensity of different areas of the scene, and determine exposure times for the images to be captured for the multi-exposure image based on the analysis.
In an embodiment, in any combination with any of the above embodiments, the controller is configured to determine based on the analysis whether flash is to be used in the images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the controller is configured to adapt, based on the analysis, the number of images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the controller is configured to detect a face in the series of images, and limit the number of the images to be captured for the multi-exposure image in response to the detection.
In an embodiment, in any combination with any of the above embodiments, the apparatus is implemented as one of a mobile phone, a mobile device or a digital camera.
In an embodiment, in any combination with any of the above embodiments, the image sensor supports interlaced exposure.
According to another aspect, there is provided a method comprising controlling, by at least one processor, a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and controlling, by the at least one processor, a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
In an embodiment, the method further comprises causing, by the at least one processor, capture of a series of images with different exposure times with the second set of pixels, analyzing, by the at least one processor, the series of images to determine light intensity of different areas of the scene, and determining, by the at least one processor, exposure times for the images to be captured for the multi-exposure image based on the analysis.
In an embodiment, in any combination with any of the above embodiments, the method further comprises determining, by the at least one processor, based on the analysis whether flash is to be used in any of the images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the method further comprises adapting, by the at least one processor, based on the analysis the number of images to be captured for the multi-exposure image.
In an embodiment, in any combination with any of the above embodiments, the method further comprises detecting, by the at least one processor, a face in the series of images, and limiting, by the at least one processor, the number of the images to be captured for the multi-exposure image in response to the detection.
In an embodiment, in any combination with any of the above embodiments, the image sensor supports interlaced exposure.
According to another aspect, there is provided a computer program comprising program code, which when executed by at least one processor, causes an apparatus to control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and control a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
According to another aspect, there is provided a computer-readable medium comprising a computer program comprising program code, which when executed by at least one processor, causes an apparatus to control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and control a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
According to another aspect, there is provided an apparatus comprising means for controlling a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two sets of pixels enabling different exposure times, and means for controlling a second set of pixels of the image sensor for exposure analysis of the scene for images to be captured for a multi-exposure image.
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
The functions described herein performed by a controller or a processor may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible storage media include disks, thumb drives, memory etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
Although the subject matter may have been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.
Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but such blocks or elements do not constitute an exclusive list, and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples, and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification. In particular, the individual features, elements, or parts described in the context of one example may be combined in any way with those of any other example.
Claims (20)
1. An apparatus comprising:
at least one processing unit;
at least one memory;
wherein the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to:
control a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two separate sets of pixels enabling different exposure times;
control a second set of pixels of the image sensor, separate from the first set of pixels of the image sensor, for exposure analysis of the scene for images to be captured for a multi-exposure image; and
capture and analyze an image with the second set of pixels to determine light intensity of different areas of the scene.
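As a hedged illustration of the final step of claim 1 (determining light intensity of different areas), one simple approach, assumed here rather than taken from the specification, is to split the frame captured with the analysis pixel set into a coarse grid and average the luminance of each tile:

```python
# Illustrative assumption: per-area light intensity as mean luminance over a
# coarse tile grid of the frame captured with the analysis pixel set.
import numpy as np


def tile_luminance(frame: np.ndarray, grid: int = 8) -> np.ndarray:
    """Return a (grid, grid) map of mean luminance for a grayscale frame."""
    h, w = frame.shape
    th, tw = h // grid, w // grid
    # Crop to a multiple of the tile size, then average within each tile.
    tiles = frame[: th * grid, : tw * grid].reshape(grid, th, grid, tw)
    return tiles.mean(axis=(1, 3))


frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
print(tile_luminance(frame).round(1))
```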
2. An apparatus of claim 1, wherein the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to:
cause capture of a series of images with different exposure times with the second set of pixels;
analyze the series of images to determine light intensity of different areas of the scene; and
determine exposure times for the images to be captured for the multi-exposure image based on the analysis.
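The flow of claim 2 (capture a bracketed series with the second pixel set, analyze it, derive exposure times) might look like the sketch below. The mid-gray target and the linear sensor-response model are illustrative simplifications, not the claimed method:

```python
# Sketch under stated assumptions: luminance scales linearly with exposure
# time, and a mid-gray level of 118 is a desirable target for an 8-bit
# pipeline. Neither assumption comes from the claims.
import numpy as np

TARGET_LEVEL = 118.0  # assumed mid-gray target


def choose_exposures(series: dict) -> list:
    """series maps probe exposure (ms) -> tile-luminance map of that capture."""
    best_probe_ms = max(series)           # longest probe: best-lit statistics
    tiles = series[best_probe_ms]
    exposures = set()
    for area_mean in (float(tiles.min()), float(tiles.max())):
        if area_mean <= 0:
            continue                      # area registered no light at all
        # Linear model: scale exposure to pull this area toward mid-gray.
        exposures.add(round(best_probe_ms * TARGET_LEVEL / area_mean, 2))
    return sorted(exposures)


series = {1.0: np.array([[4.0, 30.0]]), 8.0: np.array([[32.0, 240.0]])}
print(choose_exposures(series))  # one short and one long exposure time (ms)
```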
3. An apparatus of claim 2, wherein the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to:
determine based on the analysis whether flash is to be used in any of the images to be captured for the multi-exposure image.
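One plausible rule for the flash decision in claim 3, offered purely as an assumption, is to flag flash when a large share of analyzed tiles stays dark even in the best-exposed probe:

```python
# Assumed heuristic, not the claimed logic: request flash when most tiles of
# the best-exposed analysis frame remain below a darkness threshold.
import numpy as np


def needs_flash(tiles: np.ndarray, dark_level: float = 16.0,
                dark_fraction: float = 0.6) -> bool:
    """True when the share of dark tiles exceeds dark_fraction (assumed values)."""
    return float((tiles < dark_level).mean()) > dark_fraction


print(needs_flash(np.array([[5.0, 9.0], [12.0, 200.0]])))  # True: 3 of 4 tiles dark
```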
4. An apparatus of claim 2, wherein the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to:
detect a face in the series of images; and
limit the number of the images to be captured for the multi-exposure image in response to the detection.
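Claim 4 couples face detection to a cap on the burst length. A minimal sketch, with a placeholder detector and an assumed cap of three frames, follows:

```python
# Placeholder face detector and an assumed cap: when a face appears in the
# analysis series, shorten the burst to limit motion blur across frames.
import numpy as np

MAX_FRAMES_WITH_FACE = 3  # assumed cap, chosen for illustration only


def detect_face(frame: np.ndarray) -> bool:
    """Placeholder: a real system would run an actual face detector here."""
    return float(frame.mean()) > 0.0  # trivially true for any non-black frame


def limit_capture_count(series: list, planned: int) -> int:
    if any(detect_face(frame) for frame in series):
        return min(planned, MAX_FRAMES_WITH_FACE)
    return planned


series = [np.full((4, 4), 128.0)]
print(limit_capture_count(series, planned=5))  # -> 3 once a face is detected
```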
5. An apparatus of claim 2, wherein the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to:
adapt, based on the analysis, the number of images to be captured for the multi-exposure image.
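For claim 5, the frame count can follow the measured dynamic range of the scene. The sketch below assumes roughly one capture per three stops of range and clamps the result; both figures are invented for illustration:

```python
# Assumed sizing rule: one exposure per ~3 stops of measured scene dynamic
# range, clamped to 1..5 frames. The figures are illustrative, not claimed.
import math

import numpy as np


def adapt_frame_count(tiles: np.ndarray, stops_per_frame: float = 3.0) -> int:
    lo = max(float(tiles.min()), 1.0)          # avoid log of zero
    hi = float(tiles.max())
    scene_stops = math.log2(max(hi, lo) / lo)  # dynamic range in stops
    return int(min(max(math.ceil(scene_stops / stops_per_frame), 1), 5))


print(adapt_frame_count(np.array([[2.0, 250.0]])))  # wide range -> 3 frames
```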
6. An apparatus of claim 1, wherein the image sensor supports interlaced exposure.
7. An apparatus comprising:
a camera comprising an image sensor configured to capture image data of a scene, the image sensor having at least two separate sets of pixels enabling different exposure times;
a viewfinder configured to display image data captured with the image sensor; and
a controller configured to control a first set of pixels of the image sensor for exposure for the viewfinder and a second set of pixels of the image sensor, separate from the first set of pixels, for exposure analysis of the scene for images to be captured for a multi-exposure image, and to capture and analyze an image with the second set of pixels to determine light intensity of different areas of the scene.
8. An apparatus of claim 7, wherein the controller is configured to:
cause capture of a series of images with different exposure times with the second set of pixels;
analyze the series of images to determine light intensity of different areas of the scene; and
determine exposure times for the images to be captured for the multi-exposure image based on the analysis.
9. An apparatus of claim 8, wherein the controller is configured to:
determine based on the analysis whether flash is to be used in the images to be captured for the multi-exposure image.
10. An apparatus of claim 8, wherein the controller is configured to:
adapt, based on the analysis, the number of images to be captured for the multi-exposure image.
11. An apparatus of claim 8, wherein the controller is configured to:
detect a face in the series of images; and
limit the number of the images to be captured for the multi-exposure image in response to the detection.
12. An apparatus of claim 7, implemented as one of a mobile phone, a mobile device, or a digital camera.
13. An apparatus of claim 7, wherein the image sensor supports interlaced exposure.
14. A method comprising:
controlling, by at least one processor, a first set of pixels of an image sensor for exposure of a scene for a camera viewfinder, the image sensor having at least two separate sets of pixels enabling different exposure times;
controlling, by the at least one processor, a second set of pixels of the image sensor, separate from the first set of pixels, for exposure analysis of the scene for images to be captured for a multi-exposure image; and
capturing and analyzing an image with the second set of pixels to determine light intensity of different areas of the scene.
15. A method of claim 14, further comprising:
causing, by the at least one processor, capture of a series of images with different exposure times with the second set of pixels;
analyzing, by the at least one processor, the series of images to determine light intensity of different areas of the scene; and
determining, by the at least one processor, exposure times for the images to be captured for the multi-exposure image based on the analysis.
16. A method of claim 15, further comprising:
determining, by the at least one processor, based on the analysis whether flash is to be used in any of the images to be captured for the multi-exposure image.
17. A method of claim 15, further comprising:
adapting, by the at least one processor, based on the analysis, the number of images to be captured for the multi-exposure image.
18. A method of claim 15, further comprising:
detecting, by the at least one processor, a face in the series of images; and
limiting, by the at least one processor, the number of the images to be captured for the multi-exposure image in response to the detection.
19. A method of claim 14, wherein the image sensor supports interlaced exposure.
20. An apparatus of claim 1, wherein the two separate sets of pixels are controlled at the same time.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/253,623 US10097766B2 (en) | 2016-08-31 | 2016-08-31 | Provision of exposure times for a multi-exposure image |
PCT/US2017/048096 WO2018044631A1 (en) | 2016-08-31 | 2017-08-23 | Provision of exposure times for a multi-exposure image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/253,623 US10097766B2 (en) | 2016-08-31 | 2016-08-31 | Provision of exposure times for a multi-exposure image |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180063400A1 (en) | 2018-03-01 |
US10097766B2 (en) | 2018-10-09 |
Family
ID=59923543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/253,623 (US10097766B2, Active) | Provision of exposure times for a multi-exposure image | 2016-08-31 | 2016-08-31 |
Country Status (2)
Country | Link |
---|---|
US (1) | US10097766B2 (en) |
WO (1) | WO2018044631A1 (en) |
Worldwide Applications (2)
Date | Country | Application | Status |
---|---|---|---|
2016-08-31 | US | US15/253,623 (US10097766B2) | Active, patented |
2017-08-23 | WO | PCT/US2017/048096 | Active, application filing |
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090003708A1 (en) * | 2003-06-26 | 2009-01-01 | Fotonation Ireland Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US20050083419A1 (en) * | 2003-10-21 | 2005-04-21 | Konica Minolta Camera, Inc. | Image sensing apparatus and image sensor for use in image sensing apparatus |
US7239805B2 (en) | 2005-02-01 | 2007-07-03 | Microsoft Corporation | Method and system for combining multiple exposure images having scene and camera motion |
US20080094486A1 (en) | 2006-10-20 | 2008-04-24 | Chiou-Shann Fuh | Method and system of generating high dynamic range image corresponding to specific scene |
US8199222B2 (en) | 2007-03-05 | 2012-06-12 | DigitalOptics Corporation Europe Limited | Low-light video frame enhancement |
US20090140122A1 (en) * | 2007-10-01 | 2009-06-04 | Nikon Corporation | Solid-state imaging device, electronic camera |
US20100066858A1 (en) * | 2008-09-12 | 2010-03-18 | Sony Corporation | Imaging apparatus and imaging mode control method |
US8977073B2 (en) | 2008-12-16 | 2015-03-10 | Samsung Electronics Co., Ltd. | Apparatus and method for blending multiple images |
US8582001B2 (en) | 2009-04-08 | 2013-11-12 | Csr Technology Inc. | Exposure control for high dynamic range image capture |
US20100271512A1 (en) | 2009-04-23 | 2010-10-28 | Haim Garten | Multiple exposure high dynamic range image capture |
US8737755B2 (en) | 2009-12-22 | 2014-05-27 | Apple Inc. | Method for creating high dynamic range image |
US8885978B2 (en) | 2010-07-05 | 2014-11-11 | Apple Inc. | Operating a device to capture high dynamic range images |
US9077910B2 (en) | 2011-04-06 | 2015-07-07 | Dolby Laboratories Licensing Corporation | Multi-field CCD capture for HDR imaging |
US8890986B2 (en) | 2011-04-27 | 2014-11-18 | Aptina Imaging Corporation | Method and apparatus for capturing high dynamic range images using multi-frame interlaced exposure images |
US20120307130A1 (en) * | 2011-05-30 | 2012-12-06 | Samsung Electronics Co., Ltd. | Digital photographing apparatus, auto-focusing method, and computer-readable storage medium for executing the auto-focusing method |
US8988567B2 (en) | 2011-09-29 | 2015-03-24 | International Business Machines Corporation | Multiple image high dynamic range imaging from a single sensor array |
US20140307141A1 (en) | 2011-12-27 | 2014-10-16 | Fujifilm Corporation | Color imaging element and imaging apparatus |
US20140055638A1 (en) * | 2012-08-27 | 2014-02-27 | Samsung Electronics Co., Ltd. | Photographing apparatus, method of controlling the same, and computer-readable recording medium |
US20160173751A1 (en) * | 2013-08-22 | 2016-06-16 | Sony Corporation | Control device, control method, and electronic device |
US20150169204A1 (en) | 2013-12-12 | 2015-06-18 | Google Inc. | Interactive display of high dynamic range images |
US20150201118A1 (en) | 2014-01-10 | 2015-07-16 | Qualcomm Incorporated | System and method for capturing digital images using multiple short exposures |
US9307161B2 (en) | 2014-01-10 | 2016-04-05 | Qualcomm Incorporated | System and method for capturing digital images using multiple short exposures |
US20150312464A1 (en) | 2014-04-25 | 2015-10-29 | Himax Imaging Limited | Multi-exposure imaging system and method for eliminating rolling shutter flicker |
US20170085806A1 (en) * | 2014-11-21 | 2017-03-23 | Motorola Mobility Llc | Method and apparatus for synchronizing auto exposure between chromatic pixels and panchromatic pixels in a camera system |
US20160248990A1 (en) | 2015-02-23 | 2016-08-25 | Samsung Electronics Co., Ltd. | Image sensor and image processing system including same |
Non-Patent Citations (4)
"International Search Report and Written Opinion Issued in PCT Application No. PCT/US2017/048096", dated Nov. 29, 2017, 13 Pages. |
Cheng, et al., "High Dynamic Range image capturing by Spatial Varying Exposed Color Filter Array with specific demosaicking algorithm", In Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Aug. 23, 2009, pp. 648-653. |
Cho, et al., "Alternating line high dynamic range imaging", In Proceedings of the 17th International Conference on Digital Signal Processing, Jul. 6, 2011, 6 pages. |
Huang, et al., "High dynamic range imaging technology for micro camera array", In Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Dec. 9, 2014, 4 pages. |
Also Published As
Publication number | Publication date |
---|---|
US20180063400A1 (en) | 2018-03-01 |
WO2018044631A1 (en) | 2018-03-08 |
Similar Documents
Publication | Title |
---|---|
US10440284B2 (en) | Determination of exposure time for an image frame |
KR102444085B1 (en) | Portable communication apparatus and method for displaying images thereof |
CN109101873B (en) | Electronic device for providing characteristic information of an external light source for an object of interest |
CN105282430B (en) | Electronic device using composition information of photograph and photographing method using the same |
KR20160031706A (en) | Method for enhancing noise characteristics of image and an electronic device thereof |
US9912846B2 (en) | Obtaining calibration data of a camera |
US11514263B2 (en) | Method and apparatus for processing image |
CN110059624B (en) | Method and apparatus for liveness detection |
US20220294961A1 (en) | Parallel high dynamic exposure range sensor |
WO2012088702A1 (en) | Method and apparatus for providing a mechanism for gesture recognition |
KR20170076398A (en) | Apparatus and method for synchronizing data of an electronic device |
CN103869977B (en) | Image display method, device and electronic device |
CN109348206A (en) | Image white balance processing method, device, storage medium and mobile terminal |
KR102595449B1 (en) | Electronic apparatus and method for controlling thereof |
CN109685802B (en) | Low-delay real-time preview method for video segmentation |
US20150172541A1 (en) | Camera Array Analysis Mechanism |
CN110119459A (en) | Image data retrieval method and image data retrieval apparatus |
CN110232417B (en) | Image recognition method and device, computer equipment and computer-readable storage medium |
US10097766B2 (en) | Provision of exposure times for a multi-exposure image |
US10068151B2 (en) | Method, device and computer-readable medium for enhancing readability |
CN110942033A (en) | Method, apparatus, electronic device and computer medium for pushing information |
US20180035055A1 (en) | Facilitating capturing a digital image |
WO2023157071A1 (en) | Information processing device, information processing method, and recording medium |
CN113379644A (en) | Training sample acquisition method and device based on data augmentation, and electronic device |
CN111756705A (en) | Attack testing method, device, equipment and storage medium for a liveness detection algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: PETTERSSON, GUSTAF; BENCHEMSI, KARIM; REEL/FRAME: 039607/0277. Effective date: 20160831 |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |