
CN106204500B - Method for keeping the image color of the same scene captured by different cameras unchanged - Google Patents

Method for keeping the image color of the same scene captured by different cameras unchanged

Info

Publication number
CN106204500B
Authority
CN
China
Prior art keywords
image
camera
different camera
color
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610608435.3A
Other languages
Chinese (zh)
Other versions
CN106204500A (en)
Inventor
Li Yongjie (李永杰)
Gao Shaobing (高绍兵)
Zhang Ming (张明)
Peng Peng (彭鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610608435.3A
Publication of CN106204500A
Application granted
Publication of CN106204500B
Active (current legal status)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 7/00: Image analysis
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Of Color Television Signals (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The invention discloses a method for keeping the image color of the same scene captured by different cameras unchanged. First, image features are extracted from the images captured by the different cameras. A set of images with known illuminant colors, together with their ground-truth light sources, is then converted into the corresponding images and light sources under each camera, and a regression matrix between the image features and the ground-truth light sources of each camera is learned on the converted images. This regression is used to estimate the illuminant color of each captured image, and the color cast of each camera's image is removed accordingly. Finally, a camera transition matrix maps the cast-free images of all cameras into images as if captured by one and the same camera, so that the image color of the same scene captured by different cameras remains unchanged.

Description

Method for keeping the image color of the same scene captured by different cameras unchanged
Technical field
The invention belongs to the fields of computer vision and image processing, and in particular relates to the design of a method for keeping the image color of the same scene captured by different cameras unchanged.
Background technology
The colors of images of the same scene captured by different cameras are often very different; this can happen even between cameras of the same brand. For example, images of the same scene shot by a CANON 1D and a CANON 5D show clearly visible differences. The first reason for this phenomenon is that different cameras have different color sensitivity response functions, so the raw colors they record already differ. Second, different cameras apply different internal processing to the captured picture, such as white balance, edge enhancement, denoising and compression; the varying degrees of white balance in particular further alter the image colors, so that the final images of the same scene produced by different cameras end up different.
For practical applications such as color matching in live broadcasting and 3D filming, images of the same scene shot by different cameras must not differ in color, so achieving color consistency of the same scene across different cameras becomes particularly important. A classical approach is the method proposed by Javier Vazquez-Corral et al. in 2014 (J. Vazquez-Corral and M. Bertalmío, "Color stabilization along time and across shots of the same scene, for one or several cameras of unknown specifications," IEEE Trans. Image Process., vol. 23, pp. 4564-4575, Oct. 2014). That method learns a correction matrix from the differences between images of the same scene captured by different cameras, and then uses it to correct the image colors. Its main drawbacks are a complicated learning process that requires a large number of raw- and jpg-format pictures of the same scene for every camera, high computational complexity and poor flexibility. In addition, it cannot correct color casts in the images caused by differences in the light source.
Invention content
The purpose of the present invention is to solve the prior-art problem that images of the same scene captured by different cameras under the same light source have different colors, by proposing a method for keeping the image color of the same scene captured by different cameras unchanged.
The technical solution of the invention is a method for keeping the image color of the same scene captured by different cameras unchanged, comprising the following steps:
S1, extract image features from the images captured by the different cameras: estimate the light source of the same image captured by each camera using static illuminant estimation methods, and take all estimation results of each image, together with their cross terms, as the image features;
S2, learn the regression matrix between features and light sources for each camera: using camera transition matrices, convert a set of training images with known ground-truth light sources, together with those light sources, into the corresponding images and light sources under each of the cameras of step S1; extract the same features as in step S1 from the converted images to obtain the corresponding feature matrices; then compute, by regression, the regression matrix between features and light sources for each camera;
S3, correct the color cast of the image captured by each camera: for each camera, multiply the features extracted in step S1 by the regression matrix obtained in step S2 to obtain the final light-source estimate of that camera's image; then remove the color cast by dividing each color component of the image by the corresponding component of the estimated light source;
S4, correct the influence of the camera color sensitivity response functions: using camera transition matrices, map the cast-free images of all cameras obtained in step S3 into cast-free images as if captured by one and the same camera.
Further, the static illuminant estimation methods in step S1 are the Grey-World and Grey-Edge methods, as follows: the features to be calculated are the means of the R, G and B channels of the image and the means of the edges of the three channels; cross terms are then introduced, so that the final features are the means of the R, G and B channels, the means of the edges of the R, G and B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means, 9 features in total.
Further, the method for the recurrence in step S2 is nonlinear neural network, support vector machines or least square method.
Further, the camera transition matrices in steps S2 and S4 are obtained by using the least-squares method to compute the transformation between the responses of the different cameras' response functions to the same given surface reflectances.
The beneficial effects of the invention are as follows. The invention first extracts features from the images captured by the different cameras, then converts a set of images with known illuminant colors and their ground-truth light sources into the corresponding images and light sources under each camera, and learns on the converted images the regression matrix between the image features and the ground-truth light sources for each camera. This regression is used to estimate the illuminant color of each captured image and thus to remove its color cast. Finally, a camera transition matrix maps the cast-free images of all cameras into images as if captured by one and the same camera, achieving the goal of keeping the image color of the same scene captured by different cameras unchanged. The method has no free parameters: the transition matrices between cameras and the regression matrices between image features and ground-truth light sources only need to be computed once, so the method can be built directly into a camera to stabilize the color of the same image captured by different cameras.
Description of the drawings
Fig. 1 is a flowchart of the method for keeping the image color of the same scene captured by different cameras unchanged provided by the invention.
Fig. 2 is a schematic diagram of pic1 in the embodiment of the invention.
Fig. 3 is a schematic diagram of pic2 in the embodiment of the invention.
Fig. 4 shows the camera response function curves of the SONY DXC 930 in the embodiment of the invention.
Fig. 5 shows the camera response function curves of the NIKON D70 in the embodiment of the invention.
Fig. 6 is a schematic diagram of pic3 in the embodiment of the invention.
Fig. 7 is a schematic diagram of pic4 in the embodiment of the invention.
Fig. 8 is a schematic diagram of pic5 in the embodiment of the invention.
Detailed description of the embodiments
The embodiments of the invention are further described below with reference to the accompanying drawings.
Different cameras have different color sensitivity response functions; this difference can be removed by learning a camera transition matrix. Based on this, the invention provides a method for keeping the image color of the same scene captured by different cameras unchanged which, as shown in Fig. 1, includes the following steps:
S1, extract image features from the images captured by the different cameras: estimate the light source of the same image captured by each camera using static illuminant estimation methods, and take all estimation results of each image, together with their cross terms, as the image features.
In the embodiment of the invention, the static illuminant estimation methods are Grey-World and Grey-Edge, as follows: the features to be calculated are the means of the R, G and B channels of the image and the means of the edges of the three channels; after introducing the cross terms, the final features are the means of the R, G and B channels, the means of the edges of the R, G and B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means, 9 features in total.
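Purely as an illustration of this feature-extraction step (not taken from the patent), the following NumPy sketch builds the 9-element feature vector. The function name extract_features, the normalization of the channel means so that they sum to one (matching the worked example below), and the use of the mean gradient magnitude for the Grey-Edge part are assumptions made here.

```python
import numpy as np


def extract_features(img):
    """Sketch of the 9 features of step S1 for one RGB image.

    img: float array of shape (H, W, 3).
    Features: 3 Grey-World channel means, 3 Grey-Edge means
    (mean gradient magnitude per channel), and 3 cross terms.
    """
    # Grey-World: per-channel means, normalised to sum to 1
    gw = img.reshape(-1, 3).mean(axis=0)
    gw = gw / gw.sum()

    # Grey-Edge: per-channel mean gradient magnitude, normalised to sum to 1
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    ge = np.sqrt(gx ** 2 + gy ** 2).reshape(-1, 3).mean(axis=0)
    ge = ge / ge.sum()

    # Cross terms: square roots of pairwise products of the Grey-World means
    cross = np.sqrt([gw[0] * gw[1], gw[0] * gw[2], gw[1] * gw[2]])

    return np.concatenate([gw, ge, cross])  # shape (9,)
```

Applied to pic1 and pic2 of the embodiment below, such vectors correspond to the feature vectors stc1 and stc2.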
S2, learn the regression matrix between features and light sources for each camera: using camera transition matrices, convert a set of training images with known ground-truth light sources, together with those light sources, into the corresponding images and light sources under each of the cameras of step S1; extract the same features as in step S1 from the converted images to obtain the corresponding feature matrices; then compute, by regression, the regression matrix between features and light sources for each camera.
Linear and nonlinear regression methods such as a nonlinear neural network, a support vector machine, or the least-squares method may be used as the regression method in this step.
S3, correct the color cast of the image captured by each camera: for each camera, multiply the features extracted in step S1 by the regression matrix obtained in step S2 to obtain the final light-source estimate of that camera's image; then remove the color cast by dividing each color component of the image by the corresponding component of the estimated light source.
S4, correct the influence of the camera color sensitivity response functions: using camera transition matrices, map the cast-free images of all cameras obtained in step S3 into cast-free images as if captured by one and the same camera.
The camera transition matrices in steps S2 and S4 are obtained by using the least-squares method to compute the transformation between the responses of the different cameras' response functions to the same given surface reflectances.
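As a minimal sketch of this least-squares step only (the array layout, the NumPy calls and the function name camera_transition_matrix are assumptions, not part of the patent), the transition matrix can be obtained as follows:

```python
import numpy as np


def camera_transition_matrix(css_src, css_dst, S):
    """3x3 matrix C mapping source-camera responses to target-camera
    responses for the same surfaces, in the least-squares sense
    (see formulas (3)-(5) below).

    css_src, css_dst: (num_wavelengths, 3) colour sensitivity functions.
    S: (num_surfaces, num_wavelengths) surface reflectances.
    """
    R1 = S @ css_src            # responses of the source camera
    R2 = S @ css_dst            # responses of the target camera
    C, *_ = np.linalg.lstsq(R1, R2, rcond=None)  # solve R2 ≈ R1 @ C
    return C                    # shape (3, 3)
```

With the responses arranged one surface per row, C is a 3x3 matrix; matrices of this form are used both to convert the training set in step S2 and to unify the cast-free images in step S4.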
The method for keeping the image color of the same scene captured by different cameras unchanged provided by the invention is further described below with a specific embodiment.
The artificially synthesized surface reflectances S are downloaded from the currently internationally recognized image library website for scene illuminant color estimation, together with the 321 color-cast images T captured by a SONY DXC 930 camera and their ground-truth light sources L, which serve as the training set. The camera color sensitivity response function data of the SONY DXC 930, NIKON D70 and CANON 5D are downloaded from the internationally recognized camera response function website. Two images of the same scene, shot by a NIKON D70 and a CANON 5D and named pic1 and pic2 (shown in Fig. 2 and Fig. 3), are downloaded for the experiment. Neither image has undergone any in-camera preprocessing (such as tint correction or gamma correction). The detailed steps of the invention are then as follows:
S1, extract image features from the images captured by the different cameras: estimate the light sources of the same image (pic1 and pic2) captured by the different cameras (NIKON D70 and CANON 5D) using static illuminant estimation methods, and take all estimation results of each image, together with their cross terms, as the image features.
The static illuminant estimation methods here include a variety of classical static methods such as Grey-World, Grey-Edge and White-Patch.
In the embodiment of the invention, the light sources are estimated with the two static methods Grey-World and Grey-Edge. Taking pic1 and pic2 as examples, the channel means computed with the Grey-World estimate are (0.1600, 0.7254, 0.1146) and (0.5169, 0.3170, 0.1661) respectively, and the means of the channel edges computed with the Grey-Edge estimate are (0.1572, 0.7295, 0.1133) and (0.5126, 0.3213, 0.1661) respectively. After introducing the cross terms (for pic1, for example, sqrt(0.1600 × 0.7254) ≈ 0.3407, sqrt(0.1600 × 0.1146) ≈ 0.1354 and sqrt(0.7254 × 0.1146) ≈ 0.2883), the feature vector stc1 of pic1 and the feature vector stc2 of pic2 are:
stc1 = [0.1600 0.7254 0.1146 0.1572 0.7295 0.1133 0.3407 0.1354 0.2883];
stc2 = [0.5169 0.3170 0.1661 0.5126 0.3213 0.1661 0.4048 0.2930 0.2295].
S2, learn the regression matrix between features and light sources for each camera: using the camera transition matrices (SONY DXC 930 to NIKON D70 and SONY DXC 930 to CANON 5D), convert the training images with known ground-truth light sources (the 321 color-cast images T) and their ground-truth light sources L into the corresponding images (TN and TC) and ground-truth light sources (LN and LC) under the cameras of step S1 (NIKON D70 and CANON 5D). Extract the same features as in step S1 from the converted images (TN and TC) to obtain the corresponding feature matrices (FN and FC). Then compute, by regression, the regression matrices C1 and C2 between features and light sources for the two cameras.
Linear and nonlinear regression methods such as a nonlinear neural network, a support vector machine, or the least-squares method may be used here. In the embodiment of the invention, the least-squares method is used as the regression method.
The specific calculation is as follows: combining formulas (1) and (2) with the least-squares method, the regression matrices C1 and C2 corresponding to the NIKON D70 and CANON 5D cameras are computed.
LN=FN × C1 (1)
LC=FC × C2 (2)
The final calculation results are:
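As a purely illustrative sketch (not part of the patent) of how C1 and C2 can be obtained with the least-squares method, assuming NumPy, a one-training-image-per-row layout and the hypothetical function name learn_regression_matrix:

```python
import numpy as np


def learn_regression_matrix(F, L):
    """Least-squares solution of L ≈ F @ C (formulas (1) and (2)).

    F: (num_images, 9) feature matrix of the converted training images
       (FN or FC); L: (num_images, 3) ground-truth light sources (LN or LC).
    Returns the (9, 3) regression matrix (C1 or C2).
    """
    C, *_ = np.linalg.lstsq(F, L, rcond=None)
    return C
```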
The specific process for calculating the camera transition matrix from the SONY DXC 930 to the NIKON D70 is as follows:
First, the response R1 of the SONY DXC 930 camera to a given surface reflectance S is calculated according to formula (3):
R1=CSS1 × S (3)
where CSS1 denotes the camera color sensitivity response function of the SONY DXC 930, whose curves are shown in Fig. 4.
Then the response R2 of the NIKON D70 camera to the same given surface reflectance S is calculated according to formula (4):
R2=CSS2 × S (4)
where CSS2 denotes the camera color sensitivity response function of the NIKON D70, whose curves are shown in Fig. 5.
Finally, by formula (5) combined with the least-squares method, the camera transition matrix C from the SONY DXC 930 to the NIKON D70 is calculated:
R2=R1 × C (5)
Final calculation result is:
The camera transition matrix from the SONY DXC 930 to the CANON 5D can be calculated similarly:
S3, correct the color cast of the image captured by each camera: for each camera (NIKON D70 and CANON 5D), multiply the features extracted in step S1 (stc1 and stc2) by the regression matrices obtained in step S2 (C1 and C2) to obtain the final light-source estimates of the images: L1 = stc1 × C1 = (1.3831, 2.1880, 0.4735) and L2 = stc2 × C2 = (1.8143, 1.1406, 0.6473), which after normalization become L1 = (0.3420, 0.5410, 0.1171) and L2 = (0.5037, 0.3166, 0.1797). The color cast of each camera's image is removed by dividing each color component of the image by the corresponding component of the estimated light source, giving the cast-free versions of pic1 and pic2, named pic3 and pic4 respectively, as shown in Fig. 6 and Fig. 7.
Taking the pixel (0.0331, 0.1570, 0.0255) in pic1 as an example, after removing the color cast it becomes (0.0331/0.3420, 0.1570/0.5410, 0.0255/0.1171) = (0.0968, 0.2902, 0.2178), which is the value of the corresponding pixel in pic3.
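The same per-pixel operation over the whole image can be written as a short sketch (illustrative only; the function name remove_colour_cast and the NumPy broadcasting are assumptions, not part of the patent):

```python
import numpy as np


def remove_colour_cast(img, feat, C):
    """Step S3: estimate the light source of an image and divide it out.

    img: (H, W, 3) image; feat: its 9-element feature vector from step S1;
    C: the (9, 3) regression matrix of the corresponding camera.
    """
    est = feat @ C           # raw estimate, e.g. L1 = stc1 x C1
    est = est / est.sum()    # normalise so the three components sum to 1
    return img / est         # divide every colour channel by its component
```

Applied to pic1 with stc1 and C1, this reproduces the single-pixel example above over the whole image.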
S4, correct the influence of the camera color sensitivity response functions: using the camera transition matrix (NIKON D70 to CANON 5D), map the cast-free images of both cameras obtained in step S3 into cast-free images as if captured by one and the same camera.
The camera transition matrix from the NIKON D70 to the CANON 5D is calculated in the same way as the SONY DXC 930 to NIKON D70 transition matrix in step S2 above. The result of the calculation is:
Step S3 yields the cast-free images pic3 and pic4, whose corresponding cameras are the NIKON D70 and the CANON 5D respectively. Taking unification to CANON 5D images as the example here, pic3 is multiplied by the NIKON D70 to CANON 5D camera transition matrix, and the resulting image is named pic5, as shown in Fig. 8.
Taking the pic3 pixel (0.0968, 0.2902, 0.2178) obtained after removing the color cast in step S3 as an example, multiplying it by the NIKON D70 to CANON 5D camera transition matrix gives (0.2051, 0.2168, 0.2950), which is the value of the corresponding pixel in pic5.
The above illustration mainly uses a single image pixel as an example; in the actual calculation the operation is carried out on all pixels of the whole image.
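A minimal sketch of this whole-image step S4 (illustrative only, with an assumed function name; not the patent's wording):

```python
import numpy as np


def unify_to_reference_camera(img, C_cam_to_ref):
    """Step S4: map a cast-free image into the reference camera's space by
    multiplying each pixel (a 1x3 row vector) by the 3x3 transition matrix."""
    h, w, _ = img.shape
    return (img.reshape(-1, 3) @ C_cam_to_ref).reshape(h, w, 3)
```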
Comparing Fig. 7 and Fig. 8 shows that the colors of the two are much closer to each other than those of the original images pic1 (Fig. 2) and pic2 (Fig. 3), indicating that the color of the same image captured by the two cameras has become stable.
Those of ordinary skill in the art will understand that the embodiments described here are intended to help the reader understand the principle of the invention, and it should be understood that the scope of protection of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can, according to the technical teachings disclosed by the invention, make various other specific variations and combinations that do not depart from the essence of the invention, and these variations and combinations still fall within the scope of protection of the invention.

Claims (4)

1. A method for keeping the image color of the same scene captured by different cameras unchanged, characterized by comprising the following steps:
S1, extract image features from the images captured by the different cameras: estimate the light source of the same image captured by each camera using static illuminant estimation methods, and take all estimation results of each image, together with their cross terms, as the image features; the cross terms are formed by crossing the estimation results of the same image with one another;
S2, learn the regression matrix between features and light sources for each camera: using camera transition matrices, convert a set of training images with known ground-truth light sources, together with those light sources, into the corresponding images and light sources under each of the cameras of step S1; extract the same features as in step S1 from the converted images to obtain the corresponding feature matrices; then compute, by regression, the regression matrix between features and light sources for each camera;
S3, correct the color cast of the image captured by each camera: for each camera, multiply the features extracted in step S1 by the regression matrix obtained in step S2 to obtain the final light-source estimate of that camera's image; then remove the color cast by dividing each color component of the image by the corresponding component of the final light-source estimate;
S4, correct the influence of the camera color sensitivity response functions: using camera transition matrices, map the cast-free images of all cameras obtained in step S3 into cast-free images as if captured by one and the same camera.
2. The method for keeping the image color of the same scene captured by different cameras unchanged according to claim 1, characterized in that the static illuminant estimation methods in step S1 are the Grey-World and Grey-Edge methods, as follows: the features to be calculated are the means of the R, G and B channels of the image and the means of the edges of the three channels; after introducing the cross terms, the final features are the means of the R, G and B channels, the means of the edges of the R, G and B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means, 9 features in total.
3. The method for keeping the image color of the same scene captured by different cameras unchanged according to claim 1, characterized in that the regression method in step S2 is a nonlinear neural network, a support vector machine, or the least-squares method.
4. The method for keeping the image color of the same scene captured by different cameras unchanged according to claim 1, characterized in that the camera transition matrices in steps S2 and S4 are obtained by using the least-squares method to compute the transformation between the responses of the different cameras' response functions to the same given surface reflectances.
CN201610608435.3A 2016-07-28 2016-07-28 Method for keeping the image color of the same scene captured by different cameras unchanged Active CN106204500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610608435.3A CN106204500B (en) 2016-07-28 2016-07-28 Method for keeping the image color of the same scene captured by different cameras unchanged

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610608435.3A CN106204500B (en) 2016-07-28 2016-07-28 Method for keeping the image color of the same scene captured by different cameras unchanged

Publications (2)

Publication Number Publication Date
CN106204500A CN106204500A (en) 2016-12-07
CN106204500B 2018-10-16

Family

ID=57495879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610608435.3A Active CN106204500B (en) 2016-07-28 2016-07-28 Method for keeping the image color of the same scene captured by different cameras unchanged

Country Status (1)

Country Link
CN (1) CN106204500B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106768822B (en) * 2017-02-07 2018-12-21 中国航天空气动力技术研究院 A kind of flow field boundary layer shear stress measuring method
CN112866667B (en) * 2021-04-21 2021-07-23 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium
WO2023240650A1 (en) * 2022-06-17 2023-12-21 北京小米移动软件有限公司 Color correction matrix calibration method and apparatus for camera module


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009067097A3 (en) * 2007-11-20 2009-09-17 Nikon Corporation White balance adjustment for scenes with varying illumination
CN102138157A (en) * 2008-08-30 2011-07-27 惠普开发有限公司 Color constancy method and system
CN101706964A (en) * 2009-08-27 2010-05-12 北京交通大学 Color constancy calculating method and system based on derivative structure of image
CN101674490A (en) * 2009-09-23 2010-03-17 电子科技大学 Color image color constant method based on retina vision mechanism
CN101930596A (en) * 2010-07-19 2010-12-29 赵全友 Color constancy method in two steps under a kind of complex illumination
CN102073995A (en) * 2010-12-30 2011-05-25 上海交通大学 Color constancy method based on texture pyramid and regularized local regression
CN102306384A (en) * 2011-08-26 2012-01-04 华南理工大学 Color constancy processing method based on single image
CN103957395A (en) * 2014-05-07 2014-07-30 电子科技大学 Color constancy method with adaptive capacity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Color stabilization along time and across shots of the same scene, for one or several cameras of unknown specifications; Javier Vazquez-Corral et al.; IEEE Transactions on Image Processing; 2014-07-30; vol. 23, no. 10; pp. 4564-4575 *
Exemplar-based color constancy and multiple illumination; Hamid Reza Vaezi Joze et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2013-08-30; vol. 36, no. 5; pp. 860-873 *
Color constancy algorithm based on luminance-compensation transformation matrices; Yuan Xingsheng et al.; Journal of Image and Graphics (中国图象图形学报); 2012-09-16; vol. 17, no. 9; pp. 1055-1060 *

Also Published As

Publication number Publication date
CN106204500A (en) 2016-12-07


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant