
CN113506343B - Color coordinate estimation method, system, device and medium based on multi-source data - Google Patents


Info

Publication number
CN113506343B
CN113506343B (application CN202110658989.5A; published as CN113506343A)
Authority
CN
China
Prior art keywords
target
color
color coordinate
acquiring
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110658989.5A
Other languages
Chinese (zh)
Other versions
CN113506343A (en)
Inventor
郭逸汀
熊佳
韩欣欣
李行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Semiconductor Nanjing Co Ltd
Original Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Semiconductor Nanjing Co Ltd
Priority claimed from CN202110658989.5A
Publication of CN113506343A
Application granted
Publication of CN113506343B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Spectrometry And Color Measurement (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a color coordinate estimation method, system, device and medium based on multi-source data. The method comprises: acquiring target image data collected by a target image acquisition device under a target light source; acquiring target spectral data collected by a target spectrum acquisition device under the target light source; and processing the target image data and the target spectral data with a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source. The invention can realize accurate color coordinate estimation.

Description

Color coordinate estimation method, system, device and medium based on multi-source data
Technical Field
The invention relates to the technical field of image processing, in particular to a color coordinate estimation method, system, equipment and medium based on multi-source data.
Background
Accurate color restoration of captured images has long been pursued; it is closely related to automatic white balance (AWB) and the color correction matrix (CCM). AWB simulates the color constancy of the human visual system to restore the true colors of objects, while the CCM corrects color errors caused by color crosstalk between color blocks at the filter; both are related to color coordinate estimation.
Conventionally, automatic white balance and color correction are achieved by training a network to directly predict the AWB gain (also referred to as the RGB gain) or the CCM, which has the following disadvantages:
1. In practical applications, image information may be unreliable; for example, under overexposure or underexposure the image information is incomplete. For such images it is difficult to restore colors accurately, and the debugging workload is large.
2. When the same device needs to serve different customer groups, the preferred RGB gain and CCM may vary among those groups due to individual aesthetic differences, sometimes greatly. If the network is trained to directly predict the RGB gain or CCM, it may need to be retrained whenever a customer requires a different ideal white balance or color correction effect.
3. Due to individual differences between sensors, the RGB gain and CCM of the same scene also differ across sensors, and labeling the RGB gain and CCM when collecting sample data is difficult, so the requirement on network robustness is high.
In order to give the trained network wider applicability, higher reusability and a lower sample data acquisition difficulty, a color coordinate estimation network can be trained instead, so that color coordinates are estimated by the network and the conversion relation between color coordinates and the RGB gain or CCM is then fine-tuned according to personalized requirements.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, an object of the present invention is to provide a color coordinate estimation method, system, device and medium based on multi-source data, so as to accurately estimate the color coordinates of image data and further calculate the RGB gain or CCM from those color coordinates.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a color coordinate estimation method based on multi-source data, including:
acquiring target image data acquired by target image acquisition equipment under a target light source;
acquiring target spectrum data acquired by target spectrum acquisition equipment under the target light source;
and processing the target image data and the target spectrum data by utilizing a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source.
In a preferred embodiment of the present invention, the color coordinate estimation network includes:
a first convolutional neural network for extracting features of the target image data;
the second convolutional neural network is used for extracting the characteristics of the target spectrum data;
the fusion layer is used for fusing the characteristics of the target image data and the target spectrum data to obtain fusion characteristics;
a full connection layer for extracting the fusion features;
and the first output layer is used for acquiring the target color coordinates according to the fusion characteristics.
In a preferred embodiment of the present invention, the color coordinate estimation network further includes:
the second output layer is used for acquiring a first auxiliary color coordinate according to the characteristics of the target image data; and/or
And the third output layer is used for acquiring a second auxiliary color coordinate according to the characteristics of the target spectrum data.
In a preferred embodiment of the present invention, the training process of the color coordinate estimation network is as follows:
acquiring training image data acquired by a plurality of training image acquisition devices, wherein the type of each training image acquisition device is consistent with that of the target image acquisition device;
acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the type of each training spectrum acquisition device is consistent with that of the target spectrum acquisition device;
processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source;
calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source;
and performing iterative training on the color coordinate estimation network according to the network loss function.
In a preferred embodiment of the present invention, before processing the target image data and the target spectral data using a pre-trained color coordinate estimation network, the method further comprises performing at least one of the following pre-processing:
correcting the target image data and the target spectrum data;
and dividing the target image data into a plurality of pixel blocks, and respectively carrying out normalization processing on each color component in each pixel block.
In a preferred embodiment of the present invention, after obtaining the target color coordinates, the method further comprises:
and acquiring a target color temperature corresponding to the target light source according to the target color coordinate.
In a preferred embodiment of the present invention, after obtaining the target color coordinates, the method further comprises:
and acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinates.
In a preferred embodiment of the present invention, the obtaining a target RGB gain and/or a target CCM corresponding to the target light source according to the target color coordinates includes:
acquiring the brightness of the target light source;
acquiring a target color space corresponding to the brightness of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape;
determining a target color area to which the target color coordinate belongs in the target color space;
and acquiring the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color area and the position of the target color coordinate in the target color area.
In a preferred embodiment of the present invention, when the target color space is divided into polygonal color regions, the obtaining the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color region and the position of the target color coordinate in the target color region includes:
acquiring RGB gain and/or CCM of each vertex in the target color region;
acquiring the weight corresponding to each vertex in the target color area according to the position of the target color coordinate in the target color area;
and acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the RGB gain and/or the CCM of each vertex in the target color region and the corresponding weight.
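For a triangular color region, one natural choice of per-vertex weights is barycentric coordinates; the patent does not specify the weighting scheme, and the region vertices and per-vertex RGB gains below are made-up illustrative values:

```python
import numpy as np

def barycentric_weights(p, tri):
    # Weights of point p with respect to the triangle's three vertices;
    # for an interior point they are nonnegative and sum to 1.
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    v0, v1, v2 = b - a, c - a, np.asarray(p, dtype=float) - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - w1 - w2, w1, w2])

# Hypothetical triangular color region and pre-calibrated RGB gain at each vertex.
tri = [(0.30, 0.30), (0.35, 0.30), (0.32, 0.36)]
vertex_gains = np.array([[2.0, 1.0, 1.4],
                         [1.8, 1.0, 1.6],
                         [1.9, 1.0, 1.5]])

weights = barycentric_weights((0.32, 0.32), tri)  # target color coordinate
target_gain = weights @ vertex_gains              # weighted sum of vertex gains
```

The same weighting applies unchanged to a per-vertex CCM: each matrix is scaled by its vertex weight and the results are summed.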
In a second aspect, the present invention provides a color coordinate estimation system based on multi-source data, including:
the target image acquisition module is used for acquiring target image data acquired by target image acquisition equipment under a target light source;
the target spectrum acquisition module is used for acquiring target spectrum data acquired by target spectrum acquisition equipment under the target light source;
and the color coordinate estimation module is used for processing the target image data and the target spectrum data by utilizing a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source.
In a preferred embodiment of the present invention, the color coordinate estimation network includes:
the first convolution neural network is used for extracting the characteristics of the target image data;
the second convolutional neural network is used for extracting the characteristics of the target spectrum data;
the fusion layer is used for fusing the characteristics of the target image data and the target spectrum data to obtain fusion characteristics;
a full connection layer for extracting the fusion features;
and the first output layer is used for acquiring the target color coordinates according to the fusion characteristics.
In a preferred embodiment of the present invention, the color coordinate estimation network further includes:
the second output layer is used for acquiring a first auxiliary color coordinate according to the characteristics of the target image data; and/or
And the third output layer is used for acquiring a second auxiliary color coordinate according to the characteristics of the target spectrum data.
In a preferred embodiment of the present invention, the system further includes a training module, and the training module is specifically configured to:
acquiring training image data acquired by a plurality of training image acquisition devices, wherein the types of the training image acquisition devices are consistent with that of the target image acquisition devices;
acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the types of the training spectrum acquisition devices are consistent with that of the target spectrum acquisition device;
processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source;
calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source;
and performing iterative training on the color coordinate estimation network according to the network loss function.
In a preferred embodiment of the present invention, the system further comprises: a pre-processing module for performing at least one of the following pre-processing before processing the target image data and the target spectral data using a pre-trained color coordinate estimation network:
correcting the target image data and the target spectrum data;
dividing the target image data into a plurality of pixel blocks, and respectively carrying out normalization processing on each color component in each pixel block.
In a preferred embodiment of the present invention, the system further comprises:
and the color temperature acquisition module is used for acquiring the target color temperature corresponding to the target light source according to the target color coordinate.
In a preferred embodiment of the present invention, the system further comprises:
and the color reduction parameter acquisition module is used for acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinate.
In a preferred embodiment of the present invention, the color restoration parameter obtaining module includes:
a brightness acquisition unit for acquiring brightness of the target light source;
a color space acquisition unit for acquiring a target color space corresponding to a luminance of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape;
a target area obtaining unit, configured to determine, in the target color space, a target color area to which the target color coordinate belongs;
and the color recovery parameter acquisition unit is used for acquiring the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color area and the position of the target color coordinate in the target color area.
In a preferred embodiment of the present invention, when the target color space is divided into polygonal color regions, the color reduction parameter obtaining unit is specifically configured to:
acquiring RGB gain and/or CCM of each vertex in the target color region;
acquiring the weight corresponding to each vertex in the target color area according to the position of the target color coordinate in the target color area;
and acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the RGB gain and/or the CCM of each vertex in the target color region and the corresponding weight.
In a third aspect, the invention provides a digital imaging apparatus comprising a system as claimed in any preceding claim.
In a fourth aspect, the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of the above when executing the computer program.
In a fifth aspect, the invention provides a computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, performs the steps of the method of any of the above.
By adopting the technical scheme, the invention has the following beneficial effects:
the color coordinate estimation method based on the spectral data of the target image combines two different data sources of the target image data and the target spectral data to carry out color coordinate estimation, increases effective information relative to a single data source, reduces adverse effects caused by unreliability of the single data source, and thus can improve the accuracy of color coordinate estimation. In addition, the network trained in the invention directly outputs the color coordinates instead of the RGB gain or CCM, and the RGB gain or CCM is obtained according to the color coordinates output by the network, so that a customer can finely adjust the conversion relation between the color coordinates and the RGB gain or CCM according to the requirement, thereby avoiding the problems that the network needs to be retrained due to the difference between the RGB gain and CCM preferred by the customer and the difficulty of marking the RGB gain and CCM when sample data is obtained, effectively reducing the debugging workload and accelerating the debugging progress.
Drawings
Fig. 1 is a flowchart of a color coordinate estimation method based on multi-source data according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a color coordinate estimation network in embodiment 2 of the present invention;
FIG. 3 is a flowchart of training a color coordinate estimation network according to embodiment 2 of the present invention;
fig. 4 is a schematic structural diagram of a color coordinate estimation network in embodiment 3 of the present invention;
FIG. 5 is a flowchart of a color coordinate estimation method based on multi-source data according to embodiment 4 of the present invention;
FIG. 6 is a schematic diagram of a color space in embodiment 4 of the present invention;
FIG. 7 is a block diagram of a color coordinate estimation apparatus based on multi-source data according to embodiment 5 of the present invention;
fig. 8 is a schematic configuration diagram of a digital imaging apparatus of embodiment 6 of the present invention;
fig. 9 is a hardware structure diagram of the electronic device according to embodiment 7 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Example 1
The embodiment provides a color coordinate estimation method based on multi-source data, and as shown in fig. 1, the method specifically includes the following steps:
and S11, acquiring target image data acquired by target image acquisition equipment under a target light source.
For example, the target image acquisition device is a camera, and the target image data is a raw image acquired by the camera.
And S12, acquiring target spectrum data acquired by target spectrum acquisition equipment under the target light source.
For example, the target spectrum acquisition device is a spectrum sensor, and the target spectrum data is a spectrum acquired by the spectrum sensor.
And S13, preprocessing the target image data and the target spectrum data.
For example, in order to reduce the influence of individual differences between image acquisition devices and between spectrum acquisition devices, the target image data and the target spectrum data are corrected. Specifically, the correction can be performed according to data collected by a golden (standard) image acquisition device and a golden spectrum acquisition device, together with data calibrated on the target devices relative to the golden module. Taking the image data as an example, assume that the module OTP of the target image acquisition device records the RGB value of a certain light source as (R_current, G_current, B_current), while the module OTP of the golden image acquisition device records the RGB value of that light source as (R_golden, G_golden, B_golden). The target image data can then be corrected by multiplying the RGB value of each pixel by the corresponding per-channel ratio (R_golden/R_current, G_golden/G_current, B_golden/B_current).
In addition, the target image data can be divided into a plurality of pixel blocks, and each color component within each pixel block can be normalized separately, to speed up the subsequent network.
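The two preprocessing steps can be sketched together in NumPy. The min-max normalization, the 4×4 block size, and the OTP RGB values below are illustrative assumptions, not choices specified by the patent:

```python
import numpy as np

def golden_correction(image, rgb_current, rgb_golden):
    # Multiply each pixel's RGB by the per-channel golden/current ratio.
    gain = np.asarray(rgb_golden, dtype=float) / np.asarray(rgb_current, dtype=float)
    return image * gain  # broadcasts over an H x W x 3 array

def blockwise_normalize(image, block=4):
    # Divide the image into block x block pixel tiles and min-max
    # normalize each color component within each tile separately.
    out = np.asarray(image, dtype=float).copy()
    h, w, c = out.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            for k in range(c):
                ch = out[i:i + block, j:j + block, k]
                span = ch.max() - ch.min()
                out[i:i + block, j:j + block, k] = (
                    (ch - ch.min()) / span if span > 0 else 0.0
                )
    return out

img = np.random.default_rng(0).random((8, 8, 3))
corrected = golden_correction(img, rgb_current=(1.0, 0.9, 1.1),
                              rgb_golden=(1.0, 1.0, 1.0))
normalized = blockwise_normalize(corrected)
```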
And S14, processing the target image data and the target spectrum data by utilizing a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source.
Specifically, the target image data and the target spectrum data are input into a color coordinate estimation network trained in advance, and the color coordinate estimation network can output the target color coordinate corresponding to the target light source.
The color coordinate estimation is carried out by combining two different data sources of the target image data and the target spectrum data, effective information is increased relative to a single data source, and adverse effects caused by unreliable single data source are reduced, so that the accuracy of color coordinate estimation can be improved.
Preferably, after the target color coordinates are obtained, the target color temperature corresponding to the target light source can be obtained according to the target color coordinates.
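The patent does not specify how the color temperature is derived from the color coordinate; one common choice for CIE 1931 xy chromaticity is McCamy's cubic approximation:

```python
def cct_from_xy(x, y):
    # McCamy's approximation of correlated color temperature in kelvin.
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

cct_d65 = cct_from_xy(0.3127, 0.3290)  # D65 white point, expected near 6500 K
```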
Example 2
This example is a further modification of example 1.
Since image data is two-dimensional while spectral data is one-dimensional, the two cannot be fused by simple concatenation: converting the image data to one dimension would lose its two-dimensional information, while converting the spectral data to two dimensions would introduce spurious two-dimensional information. Therefore, this embodiment employs a multi-input color coordinate estimation network.
In the present embodiment, as shown in fig. 2, the color coordinate estimation network includes: the first convolution neural network is used for extracting the characteristics of the target image data; the second convolutional neural network is used for extracting the characteristics of the target spectrum data; the fusion layer is used for fusing the characteristics of the target image data and the target spectrum data to obtain fusion characteristics; a full connection layer for extracting the fusion features; and the first output layer is used for acquiring the target color coordinates according to the fusion characteristics. The first convolutional neural network and the second convolutional neural network respectively comprise a convolutional layer, a pooling layer, a flat layer and a full-connection layer which are sequentially arranged.
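The data flow through the multi-input network can be sketched in plain NumPy. The "branches" here are random-weight stand-ins for the trained convolutional networks (no convolution or pooling is actually performed), and all shapes (a 16×16×3 image patch, 8 spectral channels, 32-dimensional features) are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def branch(x, out_dim, seed):
    # Stand-in for a trained convolutional branch: flatten the input and
    # project it to a fixed-length feature vector with placeholder weights.
    flat = np.asarray(x, dtype=float).reshape(-1)
    w = np.random.default_rng(seed).normal(size=(out_dim, flat.size))
    return np.tanh(w @ flat)

rng = np.random.default_rng(0)
image = rng.random((16, 16, 3))  # 2-D target image data (assumed patch size)
spectrum = rng.random(8)         # 1-D target spectral data (assumed channels)

img_feat = branch(image, 32, seed=1)     # first convolutional neural network
spec_feat = branch(spectrum, 32, seed=2)  # second convolutional neural network

fused = np.concatenate([img_feat, spec_feat])  # fusion layer

w_fc = np.random.default_rng(3).normal(size=(16, fused.size))  # full connection layer
w_out = np.random.default_rng(4).normal(size=(2, 16))          # first output layer
target_xy = w_out @ np.tanh(w_fc @ fused)  # predicted (x, y) color coordinate
```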
In this embodiment, as shown in fig. 3, the training process of the color coordinate estimation network is as follows:
and S21, acquiring training image data acquired by a plurality of training image acquisition devices.
In this embodiment, the training image acquisition device is also a camera. Because cameras of different types differ in process, design and so on, the output of the trained network would differ between types, sometimes greatly; therefore the type of the training image acquisition devices is kept consistent with that of the target image acquisition device.
And S22, acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the types of the training spectrum acquisition devices are consistent with that of the target spectrum acquisition device.
In this embodiment, the training spectrum acquisition device is also a spectrum sensor. Because spectrum sensors of different types differ in process, design and so on, the output of the trained network would differ between types, sometimes greatly; therefore the type of the training spectrum acquisition devices is kept consistent with that of the target spectrum acquisition device.
And S23, processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source.
Specifically, corresponding training image data and training spectral data acquired under the same light source are input into a color coordinate estimation network, which outputs a color coordinate prediction result of the light source.
And S24, calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source.
In this embodiment, when the training image data and the training spectrum data are collected, the color coordinate labeling result is obtained by measuring the color coordinate with a corresponding color coordinate measuring device (e.g., illuminometer). Specifically, the color coordinates may be xy color coordinates or u 'v' color coordinates, or ab color coordinates (Lab domain values calculated from xy color coordinates).
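The xy and u'v' coordinate systems mentioned above are related by the standard CIE conversion, which can be checked numerically:

```python
def xy_to_u_v_prime(x, y):
    # CIE 1976 u'v' chromaticity from CIE 1931 xy chromaticity.
    denom = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / denom, 9.0 * y / denom

u_p, v_p = xy_to_u_v_prime(0.3127, 0.3290)  # D65 white point
```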
In this embodiment, the network loss function is used to characterize the deviation between the color coordinate prediction result output by the network and the corresponding color coordinate labeling result.
And S25, performing iterative training on the color coordinate estimation network according to the network loss function until a preset training termination condition is met.
Specifically, the training termination condition may be that the number of iterations reaches a predetermined number threshold, or that a loss function converges or is less than a predetermined loss threshold.
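The loss computation and both termination conditions in steps S23 to S25 can be sketched on a toy linear stand-in for the network; the data, learning rate and thresholds below are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the color coordinate estimation network: a linear map
# from 5 fused features to an (x, y) color coordinate prediction.
features = rng.random((100, 5))           # training inputs
labels = features @ rng.random((5, 2))    # color coordinate labeling results

w = np.zeros((5, 2))                      # network parameters
lr, loss_threshold, max_iters = 0.1, 1e-6, 10_000

for _ in range(max_iters):                # terminates at the iteration cap...
    pred = features @ w                   # color coordinate prediction (S23)
    loss = np.mean((pred - labels) ** 2)  # network loss function (S24)
    if loss < loss_threshold:             # ...or when the loss is small enough
        break
    grad = 2.0 * features.T @ (pred - labels) / len(features)
    w -= lr * grad                        # iterative training step (S25)
```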
The color coordinate estimation network trained by the embodiment can fuse the input image data and the spectrum data, so that the accuracy of color coordinate estimation is improved.
In order to further improve the accuracy of the color coordinate estimation, the training image data and the training spectrum data may be preprocessed before performing step S23, wherein the preprocessing process refers to step S13.
And when a data source needs to be added, adding a convolutional neural network parallel to the first convolutional neural network and the second convolutional neural network in the color coordinate estimation network, and retraining according to the steps.
When a single-camera-plus-multispectral-sensor setup is upgraded to multiple cameras plus a multispectral sensor by adding cameras, the network does not need to be retrained for the newly added cameras; only the subsequent RGB gain and CCM acquisition steps need to be performed for them, while the color coordinate is still computed with the originally calibrated single camera and multispectral sensor.
Example 3
This example is a further modification of example 2.
In this embodiment, as shown in fig. 4, the following are added to the color coordinate estimation network of embodiment 2: a second output layer for acquiring a first auxiliary color coordinate according to the features of the target image data; and a third output layer for acquiring a second auxiliary color coordinate according to the features of the target spectrum data.
The color coordinate estimation network of this embodiment may train the depth features of the spectrum and the image together, or perform regression and calculate a loss function for each type of feature separately, so that there are three loss functions in total when step S24 is executed. The three loss functions are back-propagated jointly to constrain the whole network, which achieves the best effect during training. Since this structure has one main output layer and two auxiliary output layers, it is called a multi-output structure.
Specifically, when the color coordinate estimation network of the present embodiment is trained using the foregoing steps S21 to S25, the three color coordinate prediction results obtained in step S23 are: the first color coordinate prediction result output by the first output layer, the second color coordinate prediction result output by the second output layer, and the third color coordinate prediction result output by the third output layer. Step S24 may obtain a first loss function, a second loss function, and a third loss function according to the deviations of these three prediction results from the color coordinate labeling result, and then obtain the network loss function from the three loss functions, for example by a weighted sum. Step S25 may then iteratively train the color coordinate estimation network according to the network loss function.
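The weighted combination of the three losses can be sketched as follows (the MSE loss and the specific weights are assumptions; the text only states that the three loss functions are combined, e.g., by weighting):

```python
import numpy as np

def combined_loss(pred_main, pred_img, pred_spec, target,
                  weights=(1.0, 0.3, 0.3)):
    """Weighted sum of the main loss and the two auxiliary losses.
    The weights (1.0, 0.3, 0.3) are illustrative assumptions."""
    def mse(p, t):
        return float(np.mean((np.asarray(p) - np.asarray(t)) ** 2))
    l1 = mse(pred_main, target)   # first output layer (fused features)
    l2 = mse(pred_img, target)    # second output layer (image features)
    l3 = mse(pred_spec, target)   # third output layer (spectrum features)
    w1, w2, w3 = weights
    return w1 * l1 + w2 * l2 + w3 * l3

target = [0.31, 0.33]             # example xy color coordinate label
loss = combined_loss([0.30, 0.34], [0.32, 0.30], [0.28, 0.36], target)
```

The auxiliary losses regularize the two branches; down-weighting them keeps the fused prediction as the dominant training signal.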
The color coordinate estimation network trained by the embodiment can more accurately realize color coordinate estimation.
Example 4
This example is a modification of the previous examples 1-3.
As shown in fig. 5, the color coordinate estimation method based on multi-source data of the present embodiment adds the following steps:
and S15, acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinates so as to realize automatic white balance and color correction in the following process.
In this embodiment, a specific process of obtaining the corresponding target RGB gain and/or target CCM is as follows:
S151, acquiring the brightness of the target light source.
In the present embodiment, the brightness may be measured by a brightness measuring device, or may be acquired by an AE (auto-exposure) module.
S152, obtaining each target color space corresponding to the brightness of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape.
In the present embodiment, color spaces at different luminances are generated in advance (the effective range of the color space may differ at different luminances), and each color space is divided into a number of small color regions of a predetermined shape, as shown in fig. 6. Preferably, only the color spaces at a few typical luminances (e.g., 100lx, 500lx, 700lx, 1000lx, 1500lx, 2000lx) need to be acquired.
The shapes of the color regions divided in the present embodiment include, but are not limited to, polygons (including triangles and rectangles), circles, and the like; within the same color space, only one type of shape is used, although the sizes may differ. For example, a color space may be divided into a plurality of triangular color regions whose side lengths differ from triangle to triangle.
Assuming that the target light source has a luminance of 800lx, which lies between the calibrated luminances of 700lx and 1000lx, the corresponding target color spaces are determined to be the color spaces at the luminances of 700lx and 1000lx.
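Selecting the bracketing calibrated luminances can be sketched as follows (the calibrated levels are the typical luminances named earlier; clamping at the ends of the range is an assumption for out-of-range inputs):

```python
def bracket_luminance(lum, calibrated=(100, 500, 700, 1000, 1500, 2000)):
    """Return the pair of calibrated luminances (in lx) that bracket `lum`."""
    levels = sorted(calibrated)
    if lum <= levels[0]:                 # below the lowest calibrated level: clamp
        return (levels[0], levels[0])
    if lum >= levels[-1]:                # above the highest calibrated level: clamp
        return (levels[-1], levels[-1])
    for lo, hi in zip(levels, levels[1:]):
        if lo <= lum <= hi:              # found the enclosing interval
            return (lo, hi)

pair = bracket_luminance(800)            # the 800lx example from the text
```

For the 800lx example this selects the 700lx and 1000lx color spaces, matching the worked example.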
And S153, determining a target color area to which the target color coordinate belongs in each target color space.
As shown in fig. 6, assuming that the target color coordinate is point O and lies within the triangle ΔABC in the target color space, ΔABC is the target color region.
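Determining the target color region amounts to a point-in-region test; for triangular regions this can be sketched with the standard cross-product sign method (the text does not prescribe a particular test, so this method is an assumption):

```python
def in_triangle(o, a, b, c):
    """Return True if point `o` lies inside (or on the edge of) triangle ABC."""
    def cross(p, q, r):
        # z-component of (q - p) x (r - p)
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    d1, d2, d3 = cross(a, b, o), cross(b, c, o), cross(c, a, o)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # all same sign (or zero) => inside/on edge

inside = in_triangle((0.3, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Each candidate region of the target color space is tested in turn until the region containing the target color coordinate is found.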
And S154, acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in each target color region and the position of the target color coordinate in the target color region.
For example, when the target color space is divided into polygonal color regions, step S154 includes:
S1541, acquiring the RGB gain and/or CCM of each vertex in each target color region.
In this embodiment, the RGB gains and/or CCMs of each vertex in the target color region are determined in advance by calibration and adjustment.
S1542, according to the position of the target color coordinate in each target color region, obtaining the weight corresponding to each vertex in each target color region.
For example, when the target color region is a triangle, the weight corresponding to each vertex may be obtained according to the position of the target color coordinate within the triangle. Assuming that the color coordinates of the 3 vertices are A(x1, y1), B(x2, y2), and C(x3, y3), and the target color coordinate O is (x, y), the weights corresponding to the vertices may be calculated from the Euclidean distances as follows:

weight_dist1 = 1/√((x − x1)² + (y − y1)²)

weight_dist2 = 1/√((x − x2)² + (y − y2)²)

weight_dist3 = 1/√((x − x3)² + (y − y3)²)

ratio_1 = weight_dist1/(weight_dist1 + weight_dist2 + weight_dist3)

ratio_2 = weight_dist2/(weight_dist1 + weight_dist2 + weight_dist3)

ratio_3 = 1 − ratio_1 − ratio_2

wherein ratio_1, ratio_2, and ratio_3 are the weights corresponding to the vertices A, B, and C, respectively.
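A sketch of the triangle vertex weighting, assuming the weights are normalized inverse Euclidean distances from the target color coordinate to the vertices (the inverse-distance form is an assumption consistent with the normalization ratios in the text):

```python
import math

def vertex_weights(o, a, b, c, eps=1e-12):
    """Inverse-distance weights of target point `o` w.r.t. triangle vertices A, B, C.
    weight_dist_i = 1 / distance(O, vertex_i), normalized so the ratios sum to 1.
    `eps` guards against division by zero when O coincides with a vertex."""
    def inv_dist(p, q):
        return 1.0 / max(math.hypot(p[0] - q[0], p[1] - q[1]), eps)
    w = [inv_dist(o, v) for v in (a, b, c)]
    s = sum(w)
    return [wi / s for wi in w]

# Example: O nearest to vertex A, so A receives the largest weight
r = vertex_weights((0.3, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Closer vertices receive larger weights, so the interpolated RGB gain and CCM vary smoothly as the target coordinate moves across the region.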
Alternatively, the weights may be calculated from line-segment ratios: a straight line through the point O(x, y) parallel to a side of the triangle intersects the other sides, and the ratios of the resulting segments give the weights. As shown in fig. 6, the weight corresponding to point A is ratio_1 = AD/AB, the weight corresponding to point B is ratio_2 = BF/BA, and the weight corresponding to point C is ratio_3 = 1 − ratio_1 − ratio_2.
S1543, according to the RGB gain and/or CCM of each vertex in each target color region and the corresponding weight, obtaining a target RGB gain and/or a target CCM corresponding to the target light source.
For example, assuming that the luminance of the target light source is 800lx, the corresponding target color spaces are determined to be color spaces at the luminances of 700lx and 1000lx, and the target color coordinate in the color space is the point O in Δ ABC at the luminance of 700lx, then the RGB gain and CCM of the target color coordinate in the color space at the luminance of 700lx are respectively:
AWB_700lx = AWB_gain1*ratio_1 + AWB_gain2*ratio_2 + AWB_gain3*ratio_3

CCM_700lx = CCM_matrix1*ratio_1 + CCM_matrix2*ratio_2 + CCM_matrix3*ratio_3

wherein AWB_gain1, AWB_gain2, and AWB_gain3 are the RGB gains of points A, B, and C, respectively, and CCM_matrix1, CCM_matrix2, and CCM_matrix3 are the CCMs of points A, B, and C, respectively.
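The per-color-space weighted combination can be sketched as follows (the array shapes — 3 vertices, an (R, G, B) gain triple, and a 3×3 CCM per vertex — are assumptions):

```python
import numpy as np

def blend_awb_ccm(gains, ccms, ratios):
    """Weighted combination of per-vertex RGB gains and CCMs:
    AWB = sum(AWB_gain_i * ratio_i), CCM = sum(CCM_matrix_i * ratio_i).
    gains: (3, 3) = 3 vertices x (R, G, B); ccms: (3, 3, 3) = 3 vertices x 3x3."""
    gains = np.asarray(gains, dtype=float)
    ccms = np.asarray(ccms, dtype=float)
    ratios = np.asarray(ratios, dtype=float)
    awb = np.tensordot(ratios, gains, axes=1)   # (3,) blended RGB gain
    ccm = np.tensordot(ratios, ccms, axes=1)    # (3, 3) blended CCM
    return awb, ccm

gains = [[2.0, 1.0, 1.5], [1.8, 1.0, 1.6], [2.2, 1.0, 1.4]]   # vertices A, B, C
ccms = [np.eye(3), np.eye(3), np.eye(3)]                       # identity CCMs
awb, ccm = blend_awb_ccm(gains, ccms, [0.5, 0.3, 0.2])
```

Because the ratios sum to 1, the blend is a convex combination and stays within the range spanned by the calibrated vertex values.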
Similarly, the RGB gain and CCM of the target color coordinate in the color space at the 1000lx luminance can be obtained as AWB_1000lx and CCM_1000lx.
Then, the weights of the 700lx and 1000lx luminances with respect to the 800lx luminance are calculated by linear interpolation as follows:

Ratio_700lx = (1000 − 800)/(1000 − 700)

Ratio_1000lx = (800 − 700)/(1000 − 700)
finally, calculating the target RGB gain and/or the target CCM corresponding to the target light source according to the following formula:
AWB_800lx = AWB_700lx*Ratio_700lx + AWB_1000lx*Ratio_1000lx

CCM_800lx = CCM_700lx*Ratio_700lx + CCM_1000lx*Ratio_1000lx

wherein AWB_800lx is the target RGB gain corresponding to the target light source, and CCM_800lx is the target CCM corresponding to the target light source.
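The final interpolation across luminances can be sketched as follows (linear interpolation weights are an assumption consistent with the 700lx/1000lx worked example):

```python
def interpolate_over_luminance(lum, lo, hi, awb_lo, awb_hi):
    """Linearly interpolate the per-luminance results to the target luminance:
    result = awb_lo * ratio_lo + awb_hi * ratio_hi,
    with ratio_lo = (hi - lum) / (hi - lo), so the nearer level weighs more."""
    ratio_lo = (hi - lum) / (hi - lo)
    ratio_hi = 1.0 - ratio_lo
    return [g_lo * ratio_lo + g_hi * ratio_hi
            for g_lo, g_hi in zip(awb_lo, awb_hi)]

# 800lx target bracketed by 700lx and 1000lx calibrated results
awb_800 = interpolate_over_luminance(800, 700, 1000,
                                     [2.0, 1.0, 1.5],    # AWB_700lx (assumed)
                                     [1.7, 1.0, 1.8])    # AWB_1000lx (assumed)
```

The same helper applies element-wise to flattened CCM entries as well as to RGB gains.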
It can be seen that, in this embodiment, the network is not trained to directly output the RGB gain or CCM; instead, the RGB gain or CCM is obtained from the color coordinates directly output by the network, so the client can fine-tune the conversion relationship between the color coordinates and the RGB gain or CCM as needed. This avoids retraining the network whenever the RGB gain and CCM preferred by the client differ, and avoids the difficulty of labeling RGB gain and CCM when acquiring sample data, which effectively reduces the debugging workload and accelerates the debugging progress.
Example 5
The present embodiment provides a color coordinate estimation system based on multi-source data, as shown in fig. 7, the system 1 includes:
the target image acquisition module 11 is configured to acquire target image data acquired by a target image acquisition device under a target light source;
the target spectrum acquisition module 12 is configured to acquire target spectrum data acquired by the target spectrum acquisition device under the target light source;
and the color coordinate estimation module 13 is configured to process the target image data and the target spectrum data by using a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source.
Optionally, the color coordinate estimation network comprises:
the first convolution neural network is used for extracting the characteristics of the target image data;
the second convolutional neural network is used for extracting the characteristics of the target spectrum data;
the fusion layer is used for fusing the characteristics of the target image data and the target spectrum data to obtain fusion characteristics;
a full connection layer for extracting the fusion features;
and the first output layer is used for acquiring the target color coordinates according to the fusion characteristics.
Optionally, the color coordinate estimation network further comprises:
the second output layer is used for acquiring a first auxiliary color coordinate according to the characteristics of the target image data; and/or
And the third output layer is used for acquiring a second auxiliary color coordinate according to the characteristics of the target spectrum data.
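The two-branch fusion structure above can be sketched with stub feature extractors standing in for the trained convolutional branches (all layer sizes, the random projections, and the concatenation-based fusion head are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(data, out_dim, seed):
    """Stub for a convolutional branch: a fixed random linear projection."""
    w = np.random.default_rng(seed).normal(size=(out_dim, data.size))
    return w @ data.ravel()

def estimate_color_coordinate(image, spectrum):
    f_img = extract_features(image, 8, seed=1)      # first CNN branch (image)
    f_spec = extract_features(spectrum, 8, seed=2)  # second CNN branch (spectrum)
    fused = np.concatenate([f_img, f_spec])         # fusion layer: concatenation
    w_fc = np.random.default_rng(3).normal(size=(2, fused.size))
    return w_fc @ fused                             # first output layer -> (x, y)

xy = estimate_color_coordinate(rng.normal(size=(4, 4, 3)),  # RGB patch
                               rng.normal(size=(8,)))        # spectral channels
```

A real implementation would replace the random projections with trained convolutional and fully connected layers, but the data flow — two branches, concatenation, regression head — is the same.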
Optionally, the system further includes a training module, and the training module is specifically configured to:
acquiring training image data acquired by a plurality of training image acquisition devices, wherein the types of the training image acquisition devices are consistent with that of the target image acquisition devices;
acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the types of the training spectrum acquisition devices are consistent with that of the target spectrum acquisition devices;
processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source;
calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source;
and performing iterative training on the color coordinate estimation network according to the network loss function.
Optionally, the system further comprises: a pre-processing module for performing at least one of the following pre-processing prior to processing the target image data and the target spectral data using a pre-trained color coordinate estimation network:
correcting the target image data and the target spectrum data;
dividing the target image data into a plurality of pixel blocks, and respectively carrying out normalization processing on each color component in each pixel block.
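The block-wise normalization preprocessing can be sketched as follows (min-max normalization per color component is an assumption; the text only states that each color component in each pixel block is normalized):

```python
import numpy as np

def normalize_blocks(img, block=16):
    """Divide an RGB image into block x block pixel tiles and normalize each
    color component within each tile to [0, 1]."""
    h, w, _ = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            for c in range(3):                      # R, G, B components
                tile = out[y:y + block, x:x + block, c]
                rng = tile.max() - tile.min()
                out[y:y + block, x:x + block, c] = (
                    (tile - tile.min()) / rng if rng else 0.0
                )
    return out

img = np.arange(12, dtype=float).reshape(2, 2, 3)   # tiny 2x2 RGB example
out = normalize_blocks(img, block=2)
```

Normalizing per block makes the network less sensitive to absolute exposure, since each tile's components are rescaled to the same range.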
Optionally, the system further comprises:
and the color temperature acquisition module is used for acquiring the target color temperature corresponding to the target light source according to the target color coordinate.
Optionally, the system further comprises:
and the color reduction parameter acquisition module is used for acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinate.
Optionally, the color restoration parameter obtaining module includes:
a brightness acquisition unit for acquiring brightness of the target light source;
a color space acquisition unit for acquiring a target color space corresponding to a luminance of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape;
a target area obtaining unit, configured to determine, in the target color space, a target color area to which the target color coordinate belongs;
and the color recovery parameter acquisition unit is used for acquiring the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color area and the position of the target color coordinate in the target color area.
Optionally, when the target color space is divided into polygonal color regions, the color restoration parameter obtaining unit is specifically configured to:
acquiring RGB gain and/or CCM of each vertex in the target color region;
acquiring the weight corresponding to each vertex in the target color area according to the position of the target color coordinate in the target color area;
and acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the RGB gain and/or the CCM of each vertex in the target color region and the corresponding weight.
The color coordinate estimation is carried out by combining two different data sources, the target image data and the target spectrum data, so that effective information is increased relative to a single data source and the adverse effects caused by the unreliability of a single data source are reduced, thereby improving the accuracy of the color coordinate estimation. In addition, the network trained in the present invention directly outputs the color coordinates rather than the RGB gain or CCM, and the RGB gain or CCM is obtained from the color coordinates output by the network. A customer can therefore fine-tune the conversion relationship between the color coordinates and the RGB gain or CCM as needed, which avoids retraining the network when the RGB gain and CCM preferred by the customer differ, avoids the difficulty of labeling RGB gain and CCM when acquiring sample data, effectively reduces the debugging workload, and accelerates the debugging progress.
For the system embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above described system embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present invention. One of ordinary skill in the art can understand and implement without inventive effort.
Example 6
The present embodiment provides a digital imaging apparatus, as shown in fig. 8, which includes the color coordinate estimation system 1 described in embodiment 5, and further includes an image capturing apparatus 2 and a spectrum capturing apparatus 3 communicatively connected to the color coordinate estimation system.
Example 7
Fig. 9 is a schematic diagram of an electronic device according to an exemplary embodiment of the present invention, and illustrates a block diagram of an exemplary electronic device 60 suitable for implementing embodiments of the present invention. The electronic device 60 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 9, the electronic device 60 may be embodied in the form of a general-purpose computing device, which may be, for example, a server device. The components of the electronic device 60 may include, but are not limited to: at least one processor 61, at least one memory 62, and a bus 63 connecting the various system components (including the memory 62 and the processor 61).
The bus 63 includes a data bus, an address bus, and a control bus.
The memory 62 may include volatile memory, such as Random Access Memory (RAM) 621 and/or cache memory 622, and may further include Read Only Memory (ROM) 623.
The memory 62 may also include a program tool 625 (or utility tool) having a set (at least one) of program modules 624, such program modules 624 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 61 executes various functional applications and data processing, such as the methods provided by any of the above embodiments, by running a computer program stored in the memory 62.
The electronic device 60 may also communicate with one or more external devices 64 (e.g., a keyboard, a pointing device, etc.). Such communication may be through an input/output (I/O) interface 65. The electronic device 60 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 66. As shown, the network adapter 66 communicates with the other modules of the electronic device 60 via the bus 63. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 60, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the electronic device are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Example 8
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method provided by any of the above embodiments.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps of implementing the method provided by any of the above embodiments when the program product is run on the terminal device.
Program code for carrying out the present disclosure may be written in any combination of one or more programming languages. The program code may execute entirely on the user device, partly on the user device as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the disclosure have been described above, it will be understood by those skilled in the art that this is by way of illustration only, and that the scope of the disclosure is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of this disclosure, and these changes and modifications are intended to be within the scope of this disclosure.

Claims (17)

1. A color coordinate estimation method based on multi-source data is characterized by comprising the following steps:
acquiring target image data acquired by target image acquisition equipment under a target light source;
acquiring target spectrum data acquired by target spectrum acquisition equipment under the target light source;
processing the target image data and the target spectrum data by utilizing a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source;
the training process of the color coordinate estimation network is as follows:
acquiring training image data acquired by a plurality of training image acquisition devices, wherein the types of the training image acquisition devices are consistent with that of the target image acquisition devices;
acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the types of the training spectrum acquisition devices are consistent with that of the target spectrum acquisition device;
processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source;
calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source;
and performing iterative training on the color coordinate estimation network according to the network loss function.
2. The color coordinate estimation method according to claim 1, wherein the color coordinate estimation network comprises:
a first convolutional neural network for extracting features of the target image data;
the second convolutional neural network is used for extracting the characteristics of the target spectrum data;
the fusion layer is used for fusing the characteristics of the target image data and the target spectrum data to obtain fusion characteristics;
a full connection layer for extracting the fusion features;
and the first output layer is used for acquiring the target color coordinates according to the fusion characteristics.
3. The color coordinate estimation method according to claim 2, wherein the color coordinate estimation network further comprises:
the second output layer is used for acquiring a first auxiliary color coordinate according to the characteristics of the target image data; and/or
And the third output layer is used for acquiring a second auxiliary color coordinate according to the characteristics of the target spectrum data.
4. The color coordinate estimation method of claim 1, wherein prior to processing the target image data and the target spectral data with a pre-trained color coordinate estimation network, the method further comprises performing at least one of the following pre-processing:
correcting the target image data and the target spectrum data;
and dividing the target image data into a plurality of pixel blocks, and respectively carrying out normalization processing on each color component in each pixel block.
5. The color coordinate estimation method according to claim 1, wherein after obtaining the target color coordinates, the method further comprises:
and acquiring a target color temperature corresponding to the target light source according to the target color coordinate.
6. The color coordinate estimation method according to claim 1, wherein after obtaining the target color coordinates, the method further comprises:
and acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinate.
7. The method according to claim 6, wherein the obtaining the target RGB gain and/or the target CCM corresponding to the target light source according to the target color coordinates comprises:
acquiring the brightness of the target light source;
acquiring a target color space corresponding to the brightness of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape;
determining a target color area to which the target color coordinate belongs in the target color space;
and acquiring the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color area and the position of the target color coordinate in the target color area.
8. The method according to claim 7, wherein when the target color space is divided into polygonal color regions, the obtaining a target RGB gain and/or a target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color region and the position of the target color coordinate in the target color region comprises:
acquiring RGB gain and/or CCM of each vertex in the target color area;
acquiring the weight corresponding to each vertex in the target color area according to the position of the target color coordinate in the target color area;
and acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the RGB gain and/or the CCM of each vertex in the target color area and the corresponding weight.
9. A color coordinate estimation system based on multi-source data, comprising:
the target image acquisition module is used for acquiring target image data acquired by target image acquisition equipment under a target light source;
the target spectrum acquisition module is used for acquiring target spectrum data acquired by target spectrum acquisition equipment under the target light source;
the color coordinate estimation module is used for processing the target image data and the target spectrum data by utilizing a pre-trained color coordinate estimation network to obtain a target color coordinate corresponding to the target light source;
the system further comprises a training module, the training module being specifically configured to:
acquiring training image data acquired by a plurality of training image acquisition devices, wherein the types of the training image acquisition devices are consistent with that of the target image acquisition devices;
acquiring training spectrum data acquired by a plurality of training spectrum acquisition devices, wherein the types of the training spectrum acquisition devices are consistent with that of the target spectrum acquisition devices;
processing the training image data and the training spectrum data collected under the same light source by using a preset color coordinate estimation network to obtain a color coordinate prediction result of the light source;
calculating a network loss function according to the color coordinate prediction result and the color coordinate marking result of the corresponding light source;
and performing iterative training on the color coordinate estimation network according to the network loss function.
10. The color coordinate estimation system of claim 9, further comprising: a pre-processing module for performing at least one of the following pre-processing before processing the target image data and the target spectral data using a pre-trained color coordinate estimation network:
correcting the target image data and the target spectral data;
and dividing the target image data into a plurality of pixel blocks, and respectively carrying out normalization processing on each color component in each pixel block.
11. The color coordinate estimation system of claim 9, further comprising:
and the color temperature acquisition module is used for acquiring the target color temperature corresponding to the target light source according to the target color coordinate.
12. The color coordinate estimation system of claim 9, further comprising:
and the color reduction parameter acquisition module is used for acquiring a target RGB gain or a target CCM corresponding to the target light source according to the target color coordinate.
13. The color coordinate estimation system according to claim 12, wherein the color restoration parameter acquisition module includes:
a brightness acquisition unit for acquiring brightness of the target light source;
a color space acquisition unit for acquiring a target color space corresponding to a luminance of the target light source, the target color space being divided into a plurality of color regions of a predetermined shape;
a target area obtaining unit, configured to determine, in the target color space, a target color area to which the target color coordinate belongs;
and the color restoration parameter acquisition unit is used for acquiring the target RGB gain and/or the target CCM corresponding to the target light source according to the predetermined RGB gain and/or CCM of each key point in the target color area and the position of the target color coordinate in the target color area.
14. The color coordinate estimation system according to claim 13, wherein when the target color space is divided into polygonal color regions, the color restoration parameter acquisition unit is specifically configured to:
acquiring RGB gain and/or CCM of each vertex in the target color region;
acquiring the weight corresponding to each vertex in the target color area according to the position of the target color coordinate in the target color area;
and acquiring a target RGB gain and/or a target CCM corresponding to the target light source according to the RGB gain and/or the CCM of each vertex in the target color area and the corresponding weight.
15. A digital imaging device comprising a system as claimed in any one of claims 9 to 14.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
17. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202110658989.5A 2021-06-15 2021-06-15 Color coordinate estimation method, system, device and medium based on multi-source data Active CN113506343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110658989.5A CN113506343B (en) 2021-06-15 2021-06-15 Color coordinate estimation method, system, device and medium based on multi-source data

Publications (2)

Publication Number Publication Date
CN113506343A CN113506343A (en) 2021-10-15
CN113506343B true CN113506343B (en) 2022-11-18

Family

ID=78009712


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114423126B (en) * 2022-03-16 2022-06-24 深圳市爱图仕影像器材有限公司 Light modulation method and device of light emitting device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4006986B2 (en) * 2001-12-03 2007-11-14 凸版印刷株式会社 Color material color gamut calculation method, color reproduction determination method, color material combination ratio calculation method, color material color gamut calculation device, color reproduction determination device, and color material combination ratio calculation device
KR20120119717A (en) * 2011-04-22 2012-10-31 삼성디스플레이 주식회사 Image display device and color correction method thereof
EP2672718A1 (en) * 2012-06-07 2013-12-11 Thomson Licensing Color calibration of an image capture device in a way that is adaptive to the scene to be captured
EP3449628B1 (en) * 2016-04-25 2022-12-14 Zhejiang Dahua Technology Co., Ltd Methods, systems, and media for image white balance adjustment
CN109506781B (en) * 2018-12-25 2020-10-13 武汉精立电子技术有限公司 Chrominance measuring method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant