
CN113079319B - Image adjusting method and related equipment thereof - Google Patents


Info

Publication number
CN113079319B
CN113079319B (application CN202110372884.3A)
Authority
CN
China
Prior art keywords
image
scene
determining
target
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110372884.3A
Other languages
Chinese (zh)
Other versions
CN113079319A (en)
Inventor
刘松涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Tuya Information Technology Co Ltd
Original Assignee
Hangzhou Tuya Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Tuya Information Technology Co Ltd
Priority to CN202110372884.3A
Publication of CN113079319A
Application granted
Publication of CN113079319B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/84 - Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present application discloses an image adjustment method and related device, applied to an image acquisition device and used to optimize image frames so as to achieve adaptation between images and scenes. The method includes: acquiring a first image frame captured by the image acquisition device in a current scene; determining image features of the first image frame; determining a target scene type of the current scene according to the image features; obtaining image optimization parameters corresponding to the target scene type; and optimizing, using the image optimization parameters, a second image frame captured by the image acquisition device in the current scene to obtain an optimized target image frame, where the first image frame is different from the second image frame.

Description

Image adjusting method and related equipment thereof
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an image adjusting method and related equipment.
Background
Network cameras are widely used in different scenes. In the prior art, either the image effect of the image frames output by a network camera does not change as the scene changes, or the camera outputs image frames with the same standard image effect in every scene. As a result, the image frames output in different scenes cannot faithfully reflect those scenes, and user requirements are not met; for example, after a network camera moves from a previous scene to a new scene, the image effect of its output frames deviates from the new scene.
Therefore, an urgent problem to be solved is how to enable a network camera that is applied to different scenes to still output image frames whose image effect matches each scene.
Disclosure of Invention
The embodiments of the present application provide an image adjustment method and related device, which are used to solve the problem that the image effect does not match the scene, thereby achieving adaptation between the scene and the image effect.
A first implementation manner of the first aspect of the embodiments of the present application provides an image adjustment method, applied to an image acquisition device, including:
acquiring a first image frame acquired by the image acquisition equipment in a current scene;
determining an image feature of the first image frame;
determining the target scene type of the current scene according to the image characteristics;
obtaining image optimization parameters corresponding to the target scene type;
and optimizing a second image frame acquired by the image acquisition equipment under the current scene by using the image optimization parameters to obtain an optimized target image frame, wherein the first image frame is different from the second image frame.
In combination with the first implementation manner of the first aspect of the embodiment of the present application, in the second implementation manner of the first aspect of the embodiment of the present application, the image feature includes a color feature, and then the method includes:
dividing the first image frame into a first block map of m blocks;
counting the R value, the G value and the B value of the first block map;
determining a cluster center value of the first image frame and a number of drop points associated with the cluster center value according to a clustering algorithm and an R value, a G value and a B value of the first block map;
determining a color temperature value corresponding to the clustering center value;
determining the color temperature value and the number of drop points as color features of the first image frame.
With reference to the first implementation manner of the first aspect of the embodiment of the present application or the second implementation manner of the first aspect of the embodiment of the present application, in a third implementation manner of the first aspect of the embodiment of the present application, the image feature includes a luminance feature, and then the method includes:
segmenting the first image frame into a second block map of n blocks;
counting the brightness value of the second block map;
counting the luminance mean and luminance standard deviation of the first image frame according to the brightness values of the second block maps, determining high-brightness block maps and low-brightness block maps from the n second block maps according to a preset rule, and counting the number of the high-brightness block maps and the number of the low-brightness block maps;
determining the luminance mean, the luminance standard deviation, the number of high luminance block maps, and the number of low luminance block maps as the luminance characteristics of the first image frame.
With reference to the third implementation manner of the first aspect of the embodiments of the present application, a fourth implementation manner of the first aspect of the embodiments of the present application includes:
determining a target first scene associated with the color features in a first preset relationship, wherein the first preset relationship is a corresponding relationship between a plurality of first scenes and a plurality of color features;
determining a target second scene associated with the brightness features in a second preset relationship, wherein the second preset relationship is a corresponding relationship between a plurality of second scenes and a plurality of brightness features;
determining the target first scene and the target second scene as the target scene type.
In combination with the fourth implementation manner of the first aspect of the embodiment of the present application, in the fifth implementation manner of the first aspect of the embodiment of the present application, the image optimization parameter includes any one or more of a code rate, a sharpness, a contrast, a color saturation, a denoising strength, and a white balance.
With reference to the fifth implementation manner of the first aspect of the embodiments of the present application, the sixth implementation manner of the first aspect of the embodiments of the present application includes:
if the target first scene of the target scene type is a high color temperature scene and the target second scene of the target scene type is an outdoor scene, acquiring the code rate, sharpness, contrast and color saturation associated with the outdoor scene and acquiring the white balance parameters associated with the high color temperature scene;
if the target first scene of the target scene type is a mixed color temperature scene and the target second scene of the target scene type is an indoor scene, acquiring the code rate, sharpness and denoising strength associated with the indoor scene and acquiring the white balance parameters associated with the mixed color temperature scene.
In a first implementation manner of the second aspect of the embodiments of the present application, an image capturing apparatus is provided, which includes:
the acquisition unit is used for acquiring a first image frame acquired by the image acquisition equipment in a current scene;
a first determining unit for determining an image feature of the first image frame;
the second determining unit is used for determining the target scene type of the current scene according to the image characteristics;
the obtaining unit is used for obtaining image optimization parameters corresponding to the target scene type;
and the optimization unit is used for optimizing a second image frame acquired by the image acquisition equipment in the current scene by using the image optimization parameters to obtain an optimized target image frame, wherein the first image frame is different from the second image frame.
With reference to the first implementation manner of the second aspect of the embodiments of the present application, in a second implementation manner of the second aspect of the embodiments of the present application, if the image feature includes a color feature, the first determining unit includes:
a first segmentation subunit, configured to segment the first image frame into m first block maps;
a first statistical subunit, configured to perform statistics on an R value, a G value, and a B value of the first tile map;
a first determining subunit, configured to determine, according to a clustering algorithm and an R value, a G value, and a B value of the first tile map, a cluster center value of the first image frame and a number of drop points associated with the cluster center value;
the second determining subunit is used for determining the color temperature value corresponding to the clustering center value;
a third determining subunit, configured to determine the color temperature value and the number of the falling points as the color feature of the first image frame.
With reference to the first implementation manner of the second aspect of the embodiment of the present application or the second implementation manner of the second aspect of the present application, in a third implementation manner of the second aspect of the embodiment of the present application, if the image feature includes a luminance feature, the first determining unit includes:
a second dividing subunit, configured to divide the first image frame into n second block maps;
the second statistical subunit is used for counting the brightness value of the second block map;
a third counting subunit, configured to count a luminance mean value and a luminance standard deviation of the first image frame according to the luminance values of the second block maps, determine a high-luminance block map and a low-luminance block map from the n second block maps according to a preset rule, and count the number of the high-luminance block maps and the number of the low-luminance block maps;
a fourth determining subunit, configured to determine the luminance mean, the luminance standard deviation, the number of high-luminance block maps, and the number of low-luminance block maps as the luminance characteristic of the first image frame.
With reference to the third implementation manner of the second aspect of the embodiments of the present application, in a fourth implementation manner of the second aspect of the embodiments of the present application, the second determining unit includes:
a fifth determining subunit, configured to determine, in a first preset relationship, a target first scene associated with the color feature, where the first preset relationship is a correspondence relationship between a plurality of first scenes and a plurality of color features;
a sixth determining subunit, configured to determine a target second scene associated with the brightness features in a second preset relationship, where the second preset relationship is a correspondence relationship between a plurality of second scenes and a plurality of brightness features;
a seventh determining subunit, configured to determine the target first scene and the target second scene as the target scene type.
In combination with the fourth implementation manner of the second aspect of the embodiments of the present application, in the fifth implementation manner of the second aspect of the embodiments of the present application, the image optimization parameters include any one or more of a code rate, a sharpness, a contrast, a color saturation, a denoising strength, and a white balance.
With reference to the fifth implementation manner of the second aspect of the embodiments of the present application, in a sixth implementation manner of the second aspect of the embodiments of the present application, the obtaining unit includes:
a first obtaining subunit, configured to, if the target first scene of the target scene type is a high color temperature scene and the target second scene of the target scene type is an outdoor scene, obtain the code rate, sharpness, contrast and color saturation associated with the outdoor scene, and obtain the white balance parameters associated with the high color temperature scene;
a second obtaining subunit, configured to, if the target first scene of the target scene type is a mixed color temperature scene and the target second scene of the target scene type is an indoor scene, obtain the code rate, sharpness and denoising strength associated with the indoor scene, and obtain the white balance parameters associated with the mixed color temperature scene.
In a third aspect of the embodiments of the present application, there is provided an image capturing apparatus, including:
the system comprises a central processing unit, a memory, an input/output interface, a wired or wireless network interface and a power supply;
the memory is a transient storage memory or a persistent storage memory;
the central processor is configured to communicate with the memory and execute the instructions stored in the memory on the image capturing apparatus, so as to perform the method of any one of the implementation manners of the first aspect.
In a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, in which a program is stored; when the program is executed by a computer, the computer performs the method of any one of the implementation manners of the first aspect.
In a fifth aspect of the embodiments of the present application, a computer program product is provided; when the computer program product is executed on a computer, the computer performs the method of any one of the implementation manners of the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
according to the technical scheme, after the image acquisition equipment acquires the first image frame of the current scene, the image characteristics of the first image frame are determined, the target scene type of the current scene is further determined according to the image characteristics, the image optimization parameters of the target scene type are further determined, the second image frame under the current scene is optimized through the image optimization parameters, and the optimized target image frame is obtained, wherein the first image frame is different from the second image frame. Compared with the prior art, the image acquisition equipment in the technical scheme can optimize the acquired second image frame as the target image frame based on the acquired first image frame, so that the problem that the camera in the prior art cannot output the image effect matched with the scene in different scenes is solved.
Drawings
FIG. 1 is a schematic flow chart illustrating an image adjustment method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating an image adjustment method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating an image adjustment method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating an image adjustment method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating an image adjustment method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image capturing device in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another image capturing apparatus in an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image adjusting method and related equipment thereof, which are used for optimizing an image frame so as to realize self-adaptation between an image and a scene.
Referring to fig. 1, an embodiment of an image adjusting method according to the present application includes:
101. acquiring a first image frame acquired by the image acquisition equipment in a current scene;
an image capture device acquires a first image frame in a capture current scene. The image capturing device may be a web camera, a surveillance camera, or a mobile camera, and is not limited herein.
102. Determining an image feature of the first image frame;
The image acquisition device determines the image features of the acquired first image frame, where the image features may be brightness features or color features, which is not limited herein.
It should be noted that, because the first image frame is captured in the current scene, the image features of the first image frame carry the characteristic attributes of the current scene; in other words, the image features of the first image frame reflect the characteristic attributes of the current scene.
103. Determining the target scene type of the current scene according to the image characteristics;
because the image acquisition equipment stores preset relations between different image characteristics and different scene types, after the image acquisition equipment determines the image characteristics associated with the current scene, the target scene type matched with the current scene can be determined according to the preset relations.
104. Obtaining image optimization parameters corresponding to the target scene type;
the image acquisition device acquires image optimization parameters corresponding to the target scene type, where the optimization parameters may include a code rate, a sharpness, a contrast, a color saturation, a denoising intensity, and a white balance, and are not limited herein.
It should be noted that, due to the change of the current scene, there may be a difference in the image optimization parameters of different current scenes.
105. And optimizing a second image frame acquired by the image acquisition equipment under the current scene by using the image optimization parameters to obtain an optimized target image frame, wherein the first image frame is different from the second image frame.
And the image acquisition equipment optimizes the second image frame acquired under the current scene by using the image optimization parameters so as to obtain the optimized target image frame.
The first image frame and the second image frame may be the same or different. When they are the same, the image acquisition device optimizes each captured image frame individually according to that frame's own image features, which improves optimization quality. When they are different, the image acquisition device optimizes only the second image frame, using the image optimization parameters derived from the first image frame; the second image frame is generally an image frame captured chronologically after the first image frame, which improves optimization speed.
In this embodiment, after determining the target scene type corresponding to the first image frame, the image acquisition device optimizes the second image frame in the current scene by using the image optimization parameter associated with the target scene type, so as to obtain an optimized target image frame, thereby solving the problem that in the prior art, the image effect of the image frame cannot be adjusted according to the change of the scene, and thus realizing the self-adaptation between the image effect and the scene.
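The five-step flow of fig. 1 can be sketched as follows. All function names, the single feature used, and the preset values are hypothetical illustrations, not taken from the patent; a real device would use the richer color and brightness features described below.

```python
# Illustrative sketch of steps 101-105 (names and values are hypothetical).

def extract_features(frame):
    # Step 102: derive an image feature from the first frame
    # (here just a mean luminance; the patent also uses colour features).
    return {"mean_luma": sum(frame) / len(frame)}

def classify_scene(features):
    # Step 103: map features to a scene type via a stored preset relationship.
    return "outdoor" if features["mean_luma"] > 500 else "indoor"

def lookup_params(scene):
    # Step 104: fetch the optimisation parameters stored for that scene type.
    presets = {"outdoor": {"sharpness": 1.2, "contrast": 1.1},
               "indoor": {"denoise": 0.8}}
    return presets[scene]

def optimize(frame, params):
    # Step 105: apply the parameters to a later (second) frame.
    return {"frame": frame, "applied": params}

def adjust(first_frame, second_frame):
    # Steps 101-105 chained: features come from the first frame,
    # but the parameters are applied to the second frame.
    return optimize(second_frame,
                    lookup_params(classify_scene(extract_features(first_frame))))
```

Note the asymmetry the embodiment describes: the first frame only drives the decision, while the second frame receives the optimization.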
Based on the embodiment of fig. 1, this embodiment describes how to determine the color features of the first image frame when the image features include color features. Referring to fig. 2, this embodiment includes:
201. dividing the first image frame into a first block map of m blocks;
after the image acquisition equipment acquires the first image frame, the first image frame is divided into m first block maps.
Illustratively, the image acquisition device segments the first image frame into a 32 × 32 grid, i.e., 1024 first block maps in total.
202. Counting the R value, the G value and the B value of the first block diagram;
the image acquisition device counts R, G, B values of each of the m first block maps, wherein the range of the R, G, B values is [0,255].
203. Determining a cluster center value of the first image frame and a number of drop points associated with the cluster center value according to a clustering algorithm and an R value, a G value and a B value of the first block map;
after the image acquisition equipment acquires the R, G and B values of each first block diagram, the R/G and B/G values of each first block diagram are determined, and then the R/G and B/G values of m first block diagrams are calculated according to a K-means clustering algorithm to obtain N clustering centers, wherein N is more than or equal to 3. And determining the number of the falling points corresponding to each cluster center, wherein the number of the falling points is also the number of the first block diagram associated with each cluster center, and the weight of each cluster center can be determined through the number of the falling points. And finally, determining 2 clustering centers with the maximum weight according to the weight of each of the N clustering centers, wherein the R/G and B/G values corresponding to the 2 clustering centers are the clustering center values.
204. Determining a color temperature value corresponding to the clustering center value;
The image acquisition device determines the respective color temperature values of the 2 cluster center values from those cluster center values and the color temperature curve.
205. Determining the color temperature value and the number of drop points as color features of the first image frame.
The image acquisition device determines respective color temperature values and corresponding drop point numbers of the 2 cluster center values as color features of the first image frame.
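Steps 201-203 can be sketched as below. The helper names, the flat pixel-list input, and the naive K-means seeding are illustrative assumptions; the patent specifies only the block split, the R/G and B/G ratios, K-means clustering with N ≥ 3 centers, and keeping the 2 heaviest centers.

```python
def block_ratios(pixels, m):
    # Step 201-202: split a flat list of (R, G, B) pixel tuples into m equal
    # blocks and return each block's (R/G, B/G) ratio pair
    # (patent example: m = 32 * 32 = 1024 blocks).
    size = len(pixels) // m
    ratios = []
    for i in range(m):
        chunk = pixels[i * size:(i + 1) * size]
        r = sum(p[0] for p in chunk) / size
        g = sum(p[1] for p in chunk) / size
        b = sum(p[2] for p in chunk) / size
        ratios.append((r / g, b / g))
    return ratios

def kmeans_top2(points, k=3, iters=20):
    # Step 203: minimal K-means on (R/G, B/G) points (N >= 3 centres, as in
    # the text); returns the 2 centres with the most drop points as
    # (centre, drop_count) pairs.
    centers = points[:k]  # naive seeding; assumes the first k points differ
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    ranked = sorted(zip(centers, (len(g) for g in groups)),
                    key=lambda cg: cg[1], reverse=True)
    return ranked[:2]
```

The drop count of each returned center is the weight described above; mapping the two center values to color temperatures (step 204) would use the device's calibrated color temperature curve, which is not reproduced here.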
Based on the embodiment of fig. 2, this embodiment describes how to determine the brightness features of the first image frame when the image features include brightness features. Referring to fig. 3, this embodiment includes:
301. dividing the first image frame into a second block map of n blocks;
after the image acquisition equipment acquires the first image frame, the first image frame is divided into n second block maps.
Illustratively, the image acquisition device segments the first image frame into a 17 × 15 grid, i.e., 255 second block maps in total.
302. Counting the brightness value of the second block map;
the image acquisition device counts the brightness value of each second block map in the n second block maps, and the value range of the brightness value is [0,1023].
303. Counting the brightness mean value and the brightness standard deviation of the first image frame according to the brightness value of the second block diagram, determining a high-brightness block diagram and a low-brightness block diagram from the n second block diagrams according to a preset rule, and counting the number of the high-brightness block diagrams and the number of the low-brightness block diagrams;
After determining the luminance value of each second block map, the image acquisition device computes the luminance mean of the n second block maps and, from it, the luminance standard deviation. It then determines the high-luminance block maps and the low-luminance block maps from among the n second block maps according to a preset rule, and counts the number of high-luminance block maps and the number of low-luminance block maps.
For example, a second block map whose luminance value is greater than or equal to 800 may be classified as high-luminance, and one whose luminance value is less than or equal to 50 as low-luminance; the specific thresholds are not limited herein.
304. Determining the brightness mean, the brightness standard deviation, the number of high brightness block maps and the number of low brightness block maps as the brightness characteristic of the first image frame.
The image acquisition device determines the brightness mean, the brightness standard deviation, the number of high-brightness block maps and the number of low-brightness block maps as the brightness characteristics of the first image frame.
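Steps 302-304 reduce to one statistics pass over the per-block luminance values. The function name is illustrative, and the default 800/50 cut-offs are the example thresholds given above, not values fixed by the patent:

```python
def luminance_features(luma_blocks, high=800, low=50):
    # luma_blocks: per-block luminance values in [0, 1023]
    # (patent example: n = 17 * 15 = 255 blocks).
    n = len(luma_blocks)
    mean = sum(luma_blocks) / n
    # Population standard deviation of the block luminances.
    std = (sum((v - mean) ** 2 for v in luma_blocks) / n) ** 0.5
    n_high = sum(1 for v in luma_blocks if v >= high)  # high-luminance blocks
    n_low = sum(1 for v in luma_blocks if v <= low)    # low-luminance blocks
    return mean, std, n_high, n_low
```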
Based on the foregoing embodiments, and referring to fig. 4, this embodiment describes how to determine the target scene type of the current scene from the brightness features and color features of the first image frame, including:
401. determining a target first scene associated with the color features in a first preset relation, wherein the first preset relation is a corresponding relation between a plurality of first scenes and a plurality of color features;
The image acquisition device stores a first preset relationship, which is a correspondence between different color features and different first scenes; after determining the color features, the image acquisition device determines the target first scene associated with those color features in the first preset relationship.
Illustratively, if the maximum color temperature value among the color features is greater than 6000 and the number of drop points corresponding to it is greater than 500, the target first scene is determined to be a high color temperature scene, such as a large-area sky scene. If the maximum color temperature value is less than 4500 but greater than 4000 and its number of drop points is greater than 500, the target first scene is determined to be a large-area green scene, such as a large-area grassland or forest scene. If the difference between the two color temperature values is greater than 1500 and the numbers of drop points corresponding to both are greater than 300, the target first scene is determined to be a mixed color temperature scene, such as a scene lit by mixed yellow and white lamps, or a colorful indoor scene.
402. Determining a target second scene associated with the brightness features in a second preset relationship, wherein the second preset relationship is a corresponding relationship between a plurality of target second scenes and a plurality of brightness features;
The image acquisition device stores a second preset relationship, which is a correspondence between different brightness features and different second scenes; after determining the brightness features, the image acquisition device determines the target second scene associated with those brightness features in the second preset relationship.
Illustratively, if the brightness mean of the brightness features is greater than 500 and the brightness standard deviation is greater than 200 or less than 100, the target second scene is an outdoor scene. If the brightness mean is less than 400 and the brightness standard deviation is greater than 200 or less than 100, the target second scene is an indoor scene, such as an office, a corridor, a conference room or a bedroom. If the brightness mean is greater than 400 and less than 600, the brightness standard deviation is greater than or equal to 200, and the number of high-brightness block maps is greater than or equal to 50, the target second scene is a backlight scene, such as a backlit window or a building hall. If the brightness mean is less than 150, the brightness standard deviation is less than or equal to 100, and the number of low-brightness block maps is greater than or equal to 100, the target second scene is a low-light scene, such as an unlit room or a dim road.
403. Determining the target first scene and the target second scene as the target scene type.
The image acquisition device determines the target first scene and the target second scene jointly as the target scene type.
For example, if the target first scene is determined to be a high color temperature scene and the target second scene is determined to be an outdoor scene, the target scene type is a high color temperature outdoor scene.
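The example thresholds above can be written as two rule tables. The function names and the "unclassified" fallback are hypothetical; the rules are checked in the order the text lists them, so where two rules overlap the earlier label wins:

```python
def classify_color_scene(ct_values, drop_counts):
    # Colour-feature rules paraphrased from the examples above; ct_values
    # holds the colour temperatures of the two dominant cluster centres,
    # drop_counts the matching drop-point counts.
    ct_max = max(ct_values)
    n_max = drop_counts[ct_values.index(ct_max)]
    if ct_max > 6000 and n_max > 500:
        return "high color temperature"        # e.g. large-area sky
    if 4000 < ct_max < 4500 and n_max > 500:
        return "large-area green"              # e.g. grassland, forest
    if abs(ct_values[0] - ct_values[1]) > 1500 and min(drop_counts) > 300:
        return "mixed color temperature"       # e.g. mixed yellow/white lamps
    return "unclassified"

def classify_luma_scene(mean, std, n_high, n_low):
    # Luminance-feature rules paraphrased from the examples above.
    if mean > 500 and (std > 200 or std < 100):
        return "outdoor"
    if mean < 400 and (std > 200 or std < 100):
        return "indoor"
    if 400 < mean < 600 and std >= 200 and n_high >= 50:
        return "backlight"
    if mean < 150 and std <= 100 and n_low >= 100:
        return "low light"
    return "unclassified"
```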
Based on fig. 4 and with reference to fig. 5, this embodiment describes how to obtain the image optimization parameters corresponding to the target scene type, including:
501. If the target first scene of the target scene type is a high color temperature scene and the target second scene of the target scene type is an outdoor scene, acquiring the code rate, sharpness, contrast and color saturation associated with the outdoor scene, and acquiring the white balance parameter associated with the high color temperature scene;
If the image acquisition device determines that the target first scene of the target scene type is a high color temperature scene and the target second scene is an outdoor scene, it acquires image optimization parameters such as the code rate, sharpness, contrast and color saturation parameters associated with the outdoor scene, and acquires image optimization parameters such as the white balance parameter associated with the high color temperature scene.
502. If the target first scene of the target scene type is a mixed color temperature scene and the target second scene of the target scene type is an indoor scene, acquiring the code rate, sharpness and denoising intensity parameters associated with the indoor scene, and acquiring the white balance parameter associated with the mixed color temperature scene.
If the image acquisition device determines that the target first scene of the target scene type is a mixed color temperature scene and the target second scene is an indoor scene, it acquires image optimization parameters such as the code rate, sharpness and denoising intensity parameters associated with the indoor scene, and acquires image optimization parameters such as the white balance parameter associated with the mixed color temperature scene.
After acquiring the image optimization parameters, the image acquisition device optimizes the second image frame acquired in the current scene. For example, if the image acquisition device determines that the target second scene of the target scene type is an outdoor scene, it reduces the code rate of the second image frame and enhances the sharpness, contrast and color saturation of the second image frame; if it determines that the target second scene of the target scene type is an indoor scene, it reduces the code rate of the second image frame and enhances the sharpness and denoising strength of the second image frame. If the image acquisition device determines that the target first scene of the target scene type is a high color temperature scene, it adjusts the white balance of the second image frame by reducing the white balance weight of the high color temperature drop points, so that the white balance result tilts toward low color temperature objects; if it determines that the target first scene of the target scene type is a mixed color temperature scene, it adjusts the white balance of the second image frame by reducing the white balance weight of the high color temperature drop points, so that the white balance result tilts toward high color temperature objects.
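Illustratively, the parameter lookup described in steps 501-502 and the per-scene adjustments above can be sketched as a table merge (the table and names below are hypothetical, not part of the claimed method; the labels only paraphrase the reduce/enhance/tilt directions in the text):

```python
# Hypothetical lookup table: each scene component maps to the image
# optimization parameters the description associates with it.
SCENE_PARAMS = {
    "outdoor": {"bitrate": "reduce", "sharpness": "enhance",
                "contrast": "enhance", "color_saturation": "enhance"},
    "indoor": {"bitrate": "reduce", "sharpness": "enhance",
               "denoise_strength": "enhance"},
    "high_color_temp": {"white_balance": "shift_to_low_color_temp"},
    "mixed_color_temp": {"white_balance": "shift_to_high_color_temp"},
}

def optimization_params(first_scene, second_scene):
    """Merge the parameter sets for the two components of the scene type."""
    params = {}
    params.update(SCENE_PARAMS.get(first_scene, {}))   # color temperature scene
    params.update(SCENE_PARAMS.get(second_scene, {}))  # indoor/outdoor scene
    return params
```

For a "high color temperature outdoor" scene type, the merged set would carry the outdoor code-rate/sharpness/contrast/saturation adjustments plus the high-color-temperature white balance adjustment.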
Referring to fig. 6, the present embodiment is a schematic structural diagram of an image capturing apparatus, including:
an acquiring unit 601, configured to acquire a first image frame acquired by the image acquisition device in a current scene;
a first determining unit 602 for determining image features of the first image frame;
a second determining unit 603, configured to determine a target scene type of the current scene according to the image feature;
an obtaining unit 604, configured to obtain an image optimization parameter corresponding to the target scene type;
an optimizing unit 605, configured to optimize, by using the image optimization parameter, a second image frame acquired by the image acquisition device in the current scene to obtain an optimized target image frame, where the first image frame is different from the second image frame.
In one implementation, the image feature includes a color feature, and the first determining unit includes:
a first segmentation subunit, configured to segment the first image frame into a first block map of m blocks;
a first statistical subunit, configured to perform statistics on an R value, a G value, and a B value of the first block map;
a first determining subunit, configured to determine a cluster center value of the first image frame and a number of drop points associated with the cluster center value according to a clustering algorithm and an R value, a G value, and a B value of the first block map;
the second determining subunit is used for determining the color temperature value corresponding to the clustering center value;
a third determining subunit, configured to determine the color temperature value and the number of the drop points as the color feature of the first image frame.
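Illustratively, the color-feature pipeline these subunits implement can be sketched as follows (an illustrative numpy sketch, not part of the claimed apparatus; the deterministic seeding and the fixed iteration count are simplifications, since the description only requires K-means clustering with N ≥ 3 centers and selection of the two heaviest centers):

```python
import numpy as np

def color_feature(rgb_blocks, n_clusters=3, n_iter=20):
    """Cluster per-block (R/G, B/G) chromaticity values with K-means and
    keep the two cluster centers with the largest weights.

    rgb_blocks: (m, 3) array-like of per-block mean R, G, B values.
    Returns the 2 heaviest cluster-center (R/G, B/G) values and their
    drop-point counts (number of blocks assigned to each center).
    """
    rgb = np.asarray(rgb_blocks, dtype=float)
    ratios = np.stack([rgb[:, 0] / rgb[:, 1], rgb[:, 2] / rgb[:, 1]], axis=1)

    # Deterministic seeding: evenly spaced points along the sorted R/G axis.
    order = np.argsort(ratios[:, 0])
    seed_idx = order[np.linspace(0, len(ratios) - 1, n_clusters).astype(int)]
    centers = ratios[seed_idx].copy()

    for _ in range(n_iter):  # plain Lloyd iterations
        dists = np.linalg.norm(ratios[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = ratios[labels == k].mean(axis=0)

    counts = np.bincount(labels, minlength=n_clusters)  # drop points per center
    top2 = np.argsort(counts)[-2:]  # the 2 centers with the largest weight
    return centers[top2], counts[top2]
```

The returned (R/G, B/G) pairs play the role of the cluster center values from which the color temperature value is then looked up.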
In one implementation, the image feature includes a luminance feature, and the first determining unit includes:
a second dividing subunit, configured to divide the first image frame into n second block maps;
the second counting subunit is used for counting the brightness values of the second block map;
a third counting subunit, configured to count a luminance mean value and a luminance standard deviation of the first image frame according to the luminance values of the second block maps, determine a high-luminance block map and a low-luminance block map from the n second block maps according to a preset rule, and count the number of the high-luminance block maps and the number of the low-luminance block maps;
a fourth determining subunit, configured to determine the luminance mean, the luminance standard deviation, the number of high luminance block maps, and the number of low luminance block maps as the luminance features of the first image frame.
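Illustratively, the luminance-feature pipeline above can be sketched as follows (an illustrative sketch, not part of the claimed apparatus; the high/low thresholds standing in for the "preset rule" are hypothetical):

```python
import numpy as np

def luminance_feature(block_luma, high_thresh=200, low_thresh=50):
    """Compute the luminance features for n second block maps.

    block_luma: iterable of per-block mean luminance values.
    high_thresh/low_thresh are hypothetical stand-ins for the preset rule
    that marks high- and low-luminance block maps.
    """
    luma = np.asarray(block_luma, dtype=float)
    mean = luma.mean()                         # luminance mean of the frame
    std = luma.std()                           # luminance standard deviation
    n_high = int((luma >= high_thresh).sum())  # high-luminance block count
    n_low = int((luma <= low_thresh).sum())    # low-luminance block count
    return mean, std, n_high, n_low
```

These four statistics form the luminance feature that is matched against the second preset relationship.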
In one implementation, the second determining unit includes:
a fifth determining subunit, configured to determine, in a first preset relationship, a target first scene associated with the color feature, where the first preset relationship is a correspondence relationship between multiple first scenes and multiple color features;
a sixth determining subunit, configured to determine a target second scene associated with the brightness feature in a second preset relationship, where the second preset relationship is a correspondence between multiple second scenes and multiple brightness features;
a seventh determining subunit, configured to determine the target first scene and the target second scene as the target scene type.
In one implementation, the image optimization parameters include any one or more of code rate, sharpness, contrast, color saturation, denoising strength, and white balance.
In one implementation, the obtaining unit includes:
a first obtaining subunit, configured to, if the target first scene of the target scene type is a high color temperature scene and the target second scene of the target scene type is an outdoor scene, obtain a code rate, a sharpness, a contrast, and a color saturation associated with the outdoor scene, and obtain a white balance parameter associated with the high color temperature scene;
a second obtaining subunit, configured to, if the target first scene of the target scene type is a mixed color temperature scene and the target second scene of the target scene type is an indoor scene, obtain a code rate, a sharpness, and a denoising intensity associated with the indoor scene, and obtain a white balance parameter associated with the mixed color temperature scene.
Fig. 7 is another schematic structural diagram of an image capture device according to an embodiment of the present application, where the image capture device 701 may include one or more central processing units (CPUs) 702 and a memory 706, and one or more application programs or data are stored in the memory 706.
The memory 706 may be transient storage or persistent storage. The programs stored in the memory 706 may include one or more modules, each of which may include a series of instruction operations on the image capture device. Further, the central processing unit 702 may be configured to communicate with the memory 706 and to execute, on the image capture device 701, the series of instruction operations stored in the memory 706.
The image capture device 701 may also include one or more power supplies 703, one or more wired or wireless network interfaces 704, one or more input/output interfaces 705, and/or one or more operating systems, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The image capturing device 701 may perform the operations in the embodiment shown in any one of fig. 1 to fig. 5, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (8)

1. An image adjustment method, applied to an image acquisition device, the method comprising:
acquiring a first image frame acquired by the image acquisition equipment in a current scene;
determining an image feature of the first image frame;
determining the target scene type of the current scene according to the image characteristics;
obtaining image optimization parameters corresponding to the target scene type;
optimizing a second image frame acquired by the image acquisition equipment under the current scene by using the image optimization parameters to obtain an optimized target image frame, wherein the first image frame is different from the second image frame;
wherein the image feature comprises a color feature, and the determining the image feature of the first image frame comprises:
segmenting the first image frame into a first block map of m blocks;
counting an R value, a G value and a B value of the first block map;
determining a cluster center value of the first image frame and a number of drop points associated with the cluster center value according to a clustering algorithm and the R value, the G value and the B value of the first block map;
determining a color temperature value corresponding to the cluster center value;
determining the color temperature value and the number of drop points as the color features of the first image frame;
wherein the determining the cluster center value of the first image frame and the number of drop points associated with the cluster center value comprises:
after the image acquisition device obtains the R value, the G value and the B value of each first block map, determining the R/G value and the B/G value of each first block map, and then clustering the R/G and B/G values of the m first block maps according to a K-means clustering algorithm to obtain N clustering centers, wherein N is greater than or equal to 3; determining the number of drop points corresponding to each clustering center, wherein the number of drop points is the number of first block maps associated with each clustering center, and determining the weight of each clustering center according to its number of drop points; and determining, from the N clustering centers and according to their respective weights, the 2 clustering centers with the largest weights, wherein the R/G and B/G values corresponding to the 2 clustering centers are the cluster center values.
2. The image adjustment method of claim 1, wherein the image feature comprises a luminance feature, and the determining the image feature of the first image frame comprises:
dividing the first image frame into a second block map of n blocks;
counting the brightness value of the second block map;
counting the brightness mean value and the brightness standard deviation of the first image frame according to the brightness values of the second block maps, determining high-brightness block maps and low-brightness block maps from the n second block maps according to a preset rule, and counting the number of the high-brightness block maps and the number of the low-brightness block maps;
determining the luminance mean, the luminance standard deviation, the number of high luminance block maps, and the number of low luminance block maps as the luminance characteristics of the first image frame.
3. The image adjustment method of claim 2, wherein the determining the target scene type of the current scene according to the image features comprises:
determining a target first scene associated with the color features in a first preset relation, wherein the first preset relation is a corresponding relation between a plurality of first scenes and a plurality of color features;
determining a target second scene associated with the brightness features in a second preset relationship, wherein the second preset relationship is a corresponding relationship between a plurality of second scenes and a plurality of brightness features;
determining the target first scene and the target second scene as the target scene type.
4. The image adjustment method according to claim 3, wherein the image optimization parameters include any one or more of a code rate, a sharpness, a contrast, a color saturation, a denoising strength, and a white balance.
5. The image adjustment method according to claim 4, wherein the obtaining of the image optimization parameter corresponding to the target scene type includes:
if the target first scene of the target scene type is a high color temperature scene and the target second scene of the target scene type is an outdoor scene, acquiring a code rate, a sharpness, a contrast and a color saturation associated with the outdoor scene, and acquiring a white balance parameter associated with the high color temperature scene;
if the target first scene of the target scene type is a mixed color temperature scene and the target second scene of the target scene type is an indoor scene, acquiring a code rate, a sharpness and a denoising intensity associated with the indoor scene, and acquiring a white balance parameter associated with the mixed color temperature scene.
6. An image acquisition apparatus, characterized by comprising:
the acquisition unit is used for acquiring a first image frame acquired by the image acquisition equipment in a current scene;
a first determining unit for determining an image feature of the first image frame;
the second determining unit is used for determining the target scene type of the current scene according to the image characteristics;
the obtaining unit is used for obtaining image optimization parameters corresponding to the target scene type;
the optimization unit is used for optimizing a second image frame acquired by the image acquisition equipment in the current scene by using the image optimization parameters to obtain an optimized target image frame, wherein the first image frame is different from the second image frame;
the image feature includes a color feature, the first determination unit includes:
a first segmentation subunit configured to segment the first image frame into a first block map of m blocks;
the first counting subunit is used for counting the R value, the G value and the B value of the first block map;
a first determining subunit, configured to determine, according to a clustering algorithm and the R value, the G value, and the B value of the first block map, a cluster center value of the first image frame and a number of drop points associated with the cluster center value;
the second determining subunit is used for determining a color temperature value corresponding to the cluster center value;
a third determining subunit, configured to determine the color temperature value and the number of drop points as the color features of the first image frame;
wherein the determining the cluster center value of the first image frame and the number of drop points associated with the cluster center value comprises:
after the image acquisition device obtains the R value, the G value and the B value of each first block map, determining the R/G value and the B/G value of each first block map, and then clustering the R/G and B/G values of the m first block maps according to a K-means clustering algorithm to obtain N clustering centers, wherein N is greater than or equal to 3; determining the number of drop points corresponding to each clustering center, wherein the number of drop points is the number of first block maps associated with each clustering center, and determining the weight of each clustering center according to its number of drop points; and determining, from the N clustering centers and according to their respective weights, the 2 clustering centers with the largest weights, wherein the R/G and B/G values corresponding to the 2 clustering centers are the cluster center values.
7. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 5.
8. An image acquisition apparatus, characterized by comprising:
the system comprises a central processing unit, a memory, an input/output interface, a wired or wireless network interface and a power supply;
the memory is a transient memory or a persistent memory;
the central processor is configured to communicate with the memory, and to invoke instruction operations in the memory on the image capture device to perform the method of any of claims 1 to 5.
CN202110372884.3A 2021-04-07 2021-04-07 Image adjusting method and related equipment thereof Active CN113079319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110372884.3A CN113079319B (en) 2021-04-07 2021-04-07 Image adjusting method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110372884.3A CN113079319B (en) 2021-04-07 2021-04-07 Image adjusting method and related equipment thereof

Publications (2)

Publication Number Publication Date
CN113079319A CN113079319A (en) 2021-07-06
CN113079319B true CN113079319B (en) 2022-10-14

Family

ID=76615298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110372884.3A Active CN113079319B (en) 2021-04-07 2021-04-07 Image adjusting method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN113079319B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111246621B (en) * 2020-01-10 2022-03-15 江门市征极光兆科技有限公司 WiFi technology-based neon light strip control system
CN113507643B (en) * 2021-07-09 2023-07-07 Oppo广东移动通信有限公司 Video processing method, device, terminal and storage medium
CN114125408A (en) * 2021-11-24 2022-03-01 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN110969571A (en) * 2019-11-29 2020-04-07 福州大学 Method and system for specified self-adaptive illumination migration in camera-crossing scene

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008042617A (en) * 2006-08-08 2008-02-21 Eastman Kodak Co Digital camera
US9826149B2 (en) * 2015-03-27 2017-11-21 Intel Corporation Machine learning of real-time image capture parameters
JP6639113B2 (en) * 2015-06-05 2020-02-05 キヤノン株式会社 Image recognition device, image recognition method, and program
CN108156435B (en) * 2017-12-25 2020-03-13 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and computer device
CN110555443B (en) * 2018-06-01 2022-05-20 杭州海康威视数字技术股份有限公司 Color classification method, device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN110969571A (en) * 2019-11-29 2020-04-07 福州大学 Method and system for specified self-adaptive illumination migration in camera-crossing scene

Also Published As

Publication number Publication date
CN113079319A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN113079319B (en) Image adjusting method and related equipment thereof
CN107680056B (en) Image processing method and device
CN108234971B (en) White balance parameter determines method, white balance adjustment method and device, storage medium, terminal
CN103297789B (en) White balance correcting method and white balance correcting device
CN109274985B (en) Video transcoding method and device, computer equipment and storage medium
CN108053381A (en) Dynamic tone mapping method, mobile terminal and computer readable storage medium
CN102867295B (en) A kind of color correction method for color image
CN107077830B (en) Screen brightness adjusting method suitable for unmanned aerial vehicle control end and unmanned aerial vehicle control end
CN111447372B (en) Control method, device, equipment and medium for brightness parameter adjustment
KR20170030933A (en) Image processing device and auto white balancing metohd thereof
WO2021082569A1 (en) Light compensation method for capturing picture, intelligent television and computer readable storage medium
US20140292616A1 (en) Computer monitor equalization using handheld device
CN112218065B (en) Image white balance method, system, terminal device and storage medium
WO2020119454A1 (en) Method and apparatus for color reproduction of image
CN112601063A (en) Mixed color temperature white balance method
CN109348207B (en) Color temperature adjusting method, image processing method and device, medium and electronic equipment
US9613294B2 (en) Control of computer vision pre-processing based on image matching using structural similarity
CN112492286B (en) Automatic white balance correction method, device and computer storage medium
CN116485679A (en) Low-illumination enhancement processing method, device, equipment and storage medium
CN116614716A (en) Image processing method, image processing device, storage medium, and electronic apparatus
CN115474316A (en) Method and device for controlling atmosphere lamp, electronic equipment and storage medium
CN115334250A (en) Image processing method and device and electronic equipment
CN107392860A (en) Image enchancing method and equipment, AR equipment
CN114698207A (en) Light adjusting method and device, electronic equipment and storage medium
CN114565874A (en) Method for identifying green plants in video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant