WO2023098743A1 - Automatic exposure method, apparatus and device, and storage medium - Google Patents
Automatic exposure method, apparatus and device, and storage medium
- Publication number
- WO2023098743A1 (PCT/CN2022/135546)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- preview image
- touch
- subject
- metering
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/80—Camera processing pipelines; Components thereof
Definitions
- the present disclosure relates to an automatic exposure method, device, equipment and storage medium.
- the camera function has been widely used in various electronic products such as mobile phones and computers; by taking photos or videos, it can be used for video conferencing, telemedicine, real-time monitoring, and image processing. To prevent the captured image from being too dark or too bright and to obtain a good exposure effect, it is particularly important to perform light metering and exposure control on the scene before the image is captured.
- In the related art, an average metering method, a center-weighted metering method, and a spot metering method can be used to meter the picture to be photographed.
- However, in some special scenes, such as scenes where there is a large difference in brightness between the subject and the background, the average metering method and the center-weighted metering method cannot give the subject a suitable exposure effect, while the spot metering method places high demands on the selection of the metering point; for ordinary users, failing to select a suitable metering point easily leads to overexposure or underexposure of the captured picture, which affects the clarity of the photos taken.
- In other words, in a shooting scene where there is a large difference in brightness between the subject and the background, the captured picture may be overexposed or underexposed, which affects the clarity of the photo.
- According to various embodiments of the present disclosure, an automatic exposure method, apparatus, device, and storage medium are provided.
- a method of automatic exposure comprising:
- the neural network model is configured to perform edge detection and area division on the preview image
- the dynamic weight method is used to measure the light of the main object, and the brightness information of the preview image is obtained, so as to perform exposure according to the brightness information.
- the preview image is input into a pre-trained neural network model to extract the subject area, including:
- the candidate area is input into a pre-trained neural network model for area division processing, so as to extract the subject area.
- determining the subject object based on the touch metering area and the subject area includes:
- the area corresponding to the matching position is determined as the subject object.
- the dynamic weight method is used to measure the light of the main object, and the brightness information of the preview image is obtained, including:
- the subject area is segmented, and a Gaussian distributed weight table is established centering on the touch metering area, including:
- respectively assigning photometry weight values to the touch photometry area and the associated photometry area includes:
- exposing according to the brightness information includes:
- An automatic exposure device comprising:
- an acquisition module configured to acquire a preview image
- the area extraction module is configured to input the preview image into a pre-trained neural network model to extract the subject area, and the neural network model is configured to perform edge detection and area division on the preview image;
- An area determination module configured to determine the touch metering area of the preview image when a screen touch operation is detected
- a subject determination module configured to determine a subject object based on the touch metering area and the subject area
- the light metering module is configured to use a dynamic weighting method to measure light on the subject object to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- the region extraction module includes: a first extraction unit and a second extraction unit; the first extraction unit is configured to divide the preview image according to brightness and extract candidate regions; the second extraction unit is configured to input the candidate regions into a pre-trained neural network model for area division processing, so as to extract the subject area.
- the subject determination module includes: an acquisition unit, a first determination unit, and a second determination unit; the acquisition unit is configured to respectively acquire a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image; the first determination unit is configured to determine an area corresponding to a position where the first coordinate position matches the second coordinate position; the second determination unit is configured to determine the area corresponding to the matching position as the subject object.
- the photometry module includes: an establishment unit and a third determination unit; the establishment unit is configured to perform segmentation processing on the subject area where the subject object is located and to establish a Gaussian distributed weight table centered on the touch metering area; the third determination unit is configured to determine brightness information of the preview image based on the Gaussian distributed weight table.
- the establishing unit is specifically configured to determine the associated photometric area of the touch photometric area, and the associated photometric area is an area other than the touch photometric area in the subject area; centered on the touch photometric area , respectively assign photometry weight values to the touch photometry area and the associated photometry area, so as to establish a Gaussian distributed weight table.
- the establishment unit is further configured to assign the highest metering weight value to the touch metering area and, with the touch metering area as the center, to assign corresponding metering weight values to the associated metering areas in descending order, according to the rule that the distance between the associated metering areas and the touch metering area increases from near to far.
- the light metering module is specifically configured to calculate exposure time and exposure gain by using a preset exposure control algorithm based on brightness information; when a shutter trigger operation is detected, perform exposure based on the exposure time and exposure gain.
- a computer device including a memory and one or more processors, the memory storing computer-readable instructions, and the one or more processors implementing the steps of the automatic exposure method provided by any embodiment of the present disclosure when executing the computer-readable instructions.
- One or more non-transitory computer-readable storage media having stored thereon computer-readable instructions that, when executed by one or more processors, implement the steps of the automatic exposure method provided by any embodiment of the present disclosure.
- FIG. 1 is an application scene diagram of an automatic exposure method provided by one or more embodiments of the present disclosure
- FIG. 2 is a schematic flowchart of an automatic exposure method provided by one or more embodiments of the present disclosure
- FIG. 3 is a schematic structural diagram of a convolutional neural network model provided by one or more embodiments of the present disclosure
- FIG. 4 is a schematic flowchart of a method for determining brightness information of a preview image provided by one or more embodiments of the present disclosure
- FIG. 5 is a schematic diagram of establishing a Gaussian distributed weight table centered on a touch metering area provided by one or more embodiments of the present disclosure
- FIG. 6 is a schematic diagram of a Gaussian distributed weight table provided by one or more embodiments of the present disclosure.
- FIG. 7 is a schematic flowchart of an automatic exposure method provided by one or more embodiments of the present disclosure.
- FIG. 8 is a schematic structural diagram of an automatic exposure device provided by one or more embodiments of the present disclosure.
- FIG. 9 is a schematic structural diagram of a computer device provided by one or more embodiments of the present disclosure.
- Automatic exposure refers to the camera automatically adjusting the exposure according to the intensity of light, so as to prevent overexposure or underexposure.
- The purpose of automatic exposure is to achieve an appropriate, or so-called target, brightness level under different lighting conditions and scenes, so that the captured video or image is neither too dark nor too bright; to achieve this, the lens aperture, sensor exposure time, sensor analog gain, and sensor/ISP digital gain are adjusted, and this process is called automatic exposure.
- A Convolutional Neural Network (CNN) is a feed-forward neural network that includes convolution calculations and has a deep structure; it is one of the representative algorithms of deep learning. It consists of convolutional layers and fully connected layers, as well as associated weights and pooling layers.
- Feature extraction refers to the method and process of using a computer to extract characteristic information in an image.
- feature extraction starts from an initial set of measurement data and builds derived values (features) that are intended to be informative and non-redundant, thereby facilitating subsequent learning and generalization steps.
- an average photometry method, a center-weighted photometry method, and a spot photometry method can be used to meter a photographed picture.
- the average metering method divides the picture into multiple areas, meters each area independently, and then calculates the average metering value of the entire picture;
- the center-weighted metering method is a metering mode that focuses on the center of the picture: the central area is metered, and the result is then averaged over the entire scene;
- the spot metering method meters a single point, usually the center of the picture; however, this spot metering method places high demands on the selection of the metering point, and for ordinary users an unsuitable metering point will cause overexposure or underexposure of the captured picture, which affects the clarity of the photos.
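- The three metering modes can be illustrated with a minimal sketch; the NumPy luminance plane, block grid, patch sizes, and 75% center weight below are illustrative assumptions rather than values given in the disclosure:

```python
import numpy as np

def average_metering(luma):
    """Average metering: split the picture into blocks, meter each block, then average them all."""
    h, w = luma.shape
    blocks = luma.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3))
    return blocks.mean()

def center_weighted_metering(luma, center_weight=0.75):
    """Center-weighted metering: the central area dominates, the rest of the scene is averaged in."""
    h, w = luma.shape
    center = luma[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return center_weight * center.mean() + (1.0 - center_weight) * luma.mean()

def spot_metering(luma):
    """Spot metering: meter a single small patch, here the very center of the picture."""
    h, w = luma.shape
    return luma[h // 2 - 2: h // 2 + 2, w // 2 - 2: w // 2 + 2].mean()

luma = np.clip(np.random.normal(120, 40, size=(64, 64)), 0, 255)  # stand-in luminance plane
print(average_metering(luma), center_weighted_metering(luma), spot_metering(luma))
```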
- the present disclosure provides an automatic exposure method, device, equipment and storage medium.
- This method can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of determining the subject object; by metering the subject object with the dynamic weighting method, it ensures proper exposure of the subject in scenes with a large difference in brightness between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the photos.
- FIG. 1 is an application scene diagram of an automatic exposure method in an embodiment.
- the application environment includes a terminal device 100.
- the terminal device 100 may be a terminal device with an image acquisition function.
- the terminal devices include but are not limited to various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
- the above-mentioned terminal device 100 is configured to obtain a preview image, input the preview image into a pre-trained neural network model, and extract the subject area; when a screen touch operation is detected, determine the touch metering area of the preview image; determine the subject object based on the touch metering area and the subject area; and meter the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- the above-mentioned way in which the terminal device obtains the preview image may include, but is not limited to, photosensitive acquisition through a charge-coupled device (CCD) photosensitive element or a complementary metal-oxide semiconductor (CMOS) photosensitive element.
- CCD Charge-coupled Device
- CMOS Complementary Metal-Oxide Semiconductor
- FIG. 2 is a schematic flowchart of an automatic exposure method provided by an embodiment of the present disclosure. The method is applied to a terminal device. As shown in FIG. 2 , the method includes:
- the above preview image refers to the image of the object to be photographed that is displayed in the image preview area of the terminal device before exposure. For example, when photographing a person or landscape, after the terminal device runs the camera function, an image of the person or landscape is formed in a certain area of the shooting interface for the user's reference; at this time, the area of the shooting interface where the image is displayed is the image preview area, and the displayed person or landscape image is the preview image.
- the above-mentioned object to be photographed may be a person, a landscape, an animal, an object, etc., and the object may be a house or a car, for example.
- the terminal device can receive a trigger instruction input by the user and, according to the trigger instruction, start the corresponding shooting application program on the terminal device; an image preview area is formed, and a preview image of the object to be photographed is formed in the image preview area.
- For example, when a user uses a smartphone to photograph a certain scene, the smartphone first receives an instruction to open the "Camera" application and then automatically opens it; an image preview area is formed on the screen, and the camera is called to capture the object to be photographed, so that a preview image of the object is formed in the image preview area and the terminal device thereby obtains the preview image.
- the preview image can be divided and processed according to the brightness to extract candidate areas, and the candidate areas can be input into the pre-trained neural network model for area division processing to extract the main body area.
- the aforementioned neural network model may be a convolutional neural network model.
- a preset convolutional neural network model can be used to perform brightness division processing to extract candidate regions.
- the model parameters are obtained after continuous training. It is also possible to obtain the brightness value of each pixel in the preview image, and perform brightness division processing on the preview image according to the brightness value to extract candidate regions.
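- As an illustration of brightness-based division (assuming a NumPy luminance plane and an equal-width banding rule, neither of which is specified by the disclosure), candidate regions could be extracted as brightness-band masks:

```python
import numpy as np

def candidate_regions_by_brightness(luma, n_bands=4):
    """Group pixels into n_bands equal-width luminance bands; each band is a candidate-region mask."""
    edges = np.linspace(luma.min(), luma.max() + 1e-6, n_bands + 1)
    return [(luma >= lo) & (luma < hi) for lo, hi in zip(edges[:-1], edges[1:])]

luma = np.clip(np.random.normal(120, 40, size=(480, 640)), 0, 255)  # stand-in preview luminance
candidates = candidate_regions_by_brightness(luma)                   # masks fed to the trained model
```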
- the above convolutional neural network model may at least include a convolutional layer, a pooling layer and a fully connected layer.
- the convolutional layer extracts local features from the picture through convolution-kernel filtering; the role of the pooling layer is to down-sample, reduce dimensionality, remove redundant information, compress features, simplify network complexity, and reduce the amount of computation and memory consumption. Processing through the pooling layer can effectively reduce the size of the parameter matrix, thereby reducing the number of parameters in the final fully connected layer, which speeds up computation and prevents overfitting; the fully connected layer is mainly used for classification and outputs the corresponding results.
- the candidate area is input into the pre-trained neural network model for area division processing: the candidate area can be processed sequentially through the convolutional layer, the pooling layer, and the fully connected layer, so that the subject area is determined.
- the candidate area can be preprocessed first to obtain the preprocessed candidate area, which is then input into the convolutional layer for feature extraction to obtain the output of the convolutional layer; after nonlinear mapping, this output is input into the pooling layer for down-sampling to obtain the output of the pooling layer, and the output of the pooling layer is input into the fully connected layer for processing, thereby extracting the subject area.
- the body area may include body outline and body size.
- average pooling or maximum pooling may be used for processing.
- the average pooling refers to calculating the average value of the image area as the pooled value of the area.
- Max pooling refers to selecting the maximum value of the image area as the pooled value of the area.
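- As a small numerical illustration of the two pooling modes (a NumPy sketch with made-up values, not part of the disclosure):

```python
import numpy as np

fmap = np.array([[1., 3., 2., 0.],
                 [5., 6., 1., 2.],
                 [7., 2., 9., 4.],
                 [3., 1., 0., 8.]])                  # a 4x4 feature map

windows = fmap.reshape(2, 2, 2, 2).swapaxes(1, 2)    # non-overlapping 2x2 windows
avg_pooled = windows.mean(axis=(2, 3))               # average pooling: each window's mean
max_pooled = windows.max(axis=(2, 3))                # max pooling: each window's maximum
```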
- the above convolutional neural network can be trained through the following steps: first, initialize the network weights to build the initial convolutional neural network model and obtain historical images together with their divided subject areas; then input the historical images into the convolutional layer, pooling layer, and fully connected layer to obtain the output subject area, calculate the error between the output subject area and the target value of the divided subject area to obtain the loss function, and, by minimizing the loss function, optimize and update the weight parameters of the network model to obtain the trained convolutional neural network.
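- The following is a minimal sketch of such a convolution-pooling-fully-connected model and a single training step. The PyTorch framework, the layer sizes, the 64x64 grayscale input, and the binary subject-mask target are illustrative assumptions and not details taken from the disclosure:

```python
import torch
from torch import nn

class SubjectRegionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer: local feature extraction
            nn.ReLU(),                                    # nonlinear mapping
            nn.MaxPool2d(2),                              # pooling layer: down-sampling
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64 * 64),             # fully connected layer: per-pixel subject score
        )

    def forward(self, x):
        return self.classifier(self.features(x)).view(-1, 1, 64, 64)

model = SubjectRegionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initialized weights are updated here
loss_fn = nn.BCEWithLogitsLoss()

# One training step on a hypothetical historical image and its labelled subject area.
image = torch.rand(1, 1, 64, 64)                           # grayscale candidate region
target_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()     # divided subject area (target value)
optimizer.zero_grad()
loss = loss_fn(model(image), target_mask)                  # error between output and target subject area
loss.backward()
optimizer.step()                                           # minimize the loss to update the weights
```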
- using the convolutional neural network to extract the subject area can effectively reduce large-volume image data to a small volume while preserving the image features, and it can prevent image points in the background area from unnecessarily affecting the image points in the subject area during metering, thereby greatly improving the accuracy of metering the image points in the subject area.
- the user can click on the preview image on the screen of the terminal device, so that the terminal device detects a touch operation on the screen.
- since the touch screen of the terminal device includes a series of sensors, it can detect the capacitance change caused by a finger: when the user's finger touches the screen, it affects the self-capacitance of each sensor and the mutual capacitance between them. Therefore, the touch position can be detected by detecting the change in capacitance, and the area of the preview image where the capacitance changes can be determined as the touch metering area.
- the subject object to be photographed can be determined according to a preset algorithm: the first coordinate position corresponding to the touch metering area in the preview image and the second coordinate position corresponding to the subject area in the preview image are obtained respectively, the area corresponding to the position where the first coordinate position matches the second coordinate position is then determined, and the area corresponding to the matching position is determined as the subject object.
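- A simple sketch of this coordinate-matching step (the window size, mask representation, and NumPy types are assumptions made for illustration; the disclosure does not prescribe them):

```python
import numpy as np

def determine_subject(subject_mask, touch_xy, half_size=20):
    """Return the subject area as the subject object if it matches the touch metering window, else None."""
    h, w = subject_mask.shape
    x, y = touch_xy
    touch_mask = np.zeros_like(subject_mask)
    touch_mask[max(0, y - half_size): min(h, y + half_size),
               max(0, x - half_size): min(w, x + half_size)] = True   # first coordinate position
    overlap = subject_mask & touch_mask                               # positions where both match
    return subject_mask if overlap.any() else None

subject_mask = np.zeros((480, 640), dtype=bool)
subject_mask[100:300, 200:400] = True                 # second coordinate position (extracted subject area)
subject_object = determine_subject(subject_mask, touch_xy=(250, 150))
```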
- the subject object in this embodiment refers to the object that the terminal device needs to focus on during exposure.
- the subject object is the object with the largest brightness weight value in the algorithm.
- the main object in this embodiment can be a specific object, such as a person's face, a human body, or an item, or a certain area in the preview image, such as an area of scenery, or a person and its surrounding area.
- For example, the user opens the camera application and a preview image is formed in the image preview area; the preview image can be divided into regions through the pre-trained convolutional neural network model so as to obtain the subject area. After the user performs a tap operation on the preview image, the smartphone determines the touch metering area in the preview image according to the position tapped by the user, and a selection frame can be formed on the screen interface; the area corresponding to the position where the frame matches the subject area is determined as the subject object.
- light metering may be performed on the main object according to the light reflected by the main object, so as to obtain brightness information of the preview image.
- FIG. 4 is a schematic flowchart of a method for determining brightness information of a preview image provided by an embodiment of the present disclosure. As shown in FIG. 4 , the method includes:
- S201 Perform segmentation processing on the subject area where the subject object is located, and establish a Gaussian distributed weight table centered on the touch metering area.
- the main body area where the main object is located may be segmented, for example using an average segmentation method; the associated metering areas of the touch metering area are determined, the associated metering areas being the areas of the subject area other than the touch metering area. With the touch metering area as the center, metering weight values are assigned to the touch metering area and the associated metering areas respectively, so as to establish a Gaussian distributed weight table.
- FIG. 5 is a schematic diagram of establishing a Gaussian distributed weight table centered on the touch metering area. FIG. 5 shows the acquired preview image, the subject area extracted after processing through the convolutional neural network model, the detected touch metering area, and the associated metering areas, the associated metering areas being the areas of the subject area other than the touch metering area; the subject area where the subject object is located is then segmented, and a Gaussian distributed weight table is established centered on the touch metering area.
- the metering weight value of the touch metering area is higher than the metering weight value of the associated metering areas.
- FIG. 6 is a schematic diagram of the established Gaussian distributed weight table provided by an embodiment of the present disclosure. Referring to FIG. 6, with the touch metering area as the center, and according to the rule that the distance between the associated metering areas and the touch metering area increases from near to far, corresponding metering weight values are assigned to the associated metering areas in descending order. For example, the metering weight value assigned to the brightness information of the touch metering area is 100%, and the metering weight values of the brightness information of the associated metering areas, ordered from nearest to farthest from the touch metering area, are 90%, 80%, 60%, 40%, and 20% in turn.
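- A minimal sketch of building such a weight table over grid cells, here with a continuous Gaussian fall-off (the 5x5 grid and the sigma value are illustrative assumptions; the disclosure's example uses the discrete 100%/90%/.../20% steps listed above):

```python
import numpy as np

def gaussian_weight_table(shape, touch_rc, sigma=2.0):
    """Weights centred on the touch metering cell: 1.0 at the centre, lower the farther a cell is."""
    rows, cols = np.indices(shape)
    dist2 = (rows - touch_rc[0]) ** 2 + (cols - touch_rc[1]) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))

weights = gaussian_weight_table((5, 5), touch_rc=(2, 2))
print(np.round(weights, 2))   # highest weight at the touch cell, decreasing with distance
```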
- the brightness values of the touch metering area and the associated photometering area can be obtained through photometry, and the photometry can be performed by internal photometry or external photometry.
- in external metering, the optical path of the metering element is independent of the lens, and this metering method is widely used in eye-level viewfinder lens-shutter cameras; in internal metering, light is measured through the lens.
- the Gaussian distributed weight table and the luminance values can be weighted and summed to obtain the luminance information of the preview image.
- the brightness information may be a brightness value.
- For example, suppose the luminance value obtained by metering the touch metering area is X, and there are five associated metering areas whose luminance values, ordered from nearest to farthest from the touch metering area, are Y, Z, H, G, and K in turn. If the metering weight value of the brightness information of the touch metering area is 100% and the metering weight values of the brightness information of the associated metering areas, from nearest to farthest, are 90%, 80%, 60%, 40%, and 20%, then the final brightness information of the entire preview picture is 100%*X + 90%*Y + 80%*Z + 60%*H + 40%*G + 20%*K.
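- The worked example above, written out in code with hypothetical luminance values for X, Y, Z, H, G, and K (the disclosure gives only the weights, not the luminance values):

```python
X, Y, Z, H, G, K = 180, 150, 140, 120, 90, 60           # hypothetical metered luminance values
weights = [1.00, 0.90, 0.80, 0.60, 0.40, 0.20]          # 100% for the touch area, then near-to-far

brightness = sum(w * v for w, v in zip(weights, [X, Y, Z, H, G, K]))
# brightness == 100%*X + 90%*Y + 80%*Z + 60%*H + 40%*G + 20%*K, as in the text
```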
- exposure can be performed according to the brightness information.
- exposure is a physical quantity used to calculate the luminous flux from the scene to the camera.
- the image sensor can only get high-quality photos if it gets the correct exposure. If it is overexposed, the image will look too bright; if it is underexposed, the image will look too dark.
- the size of the luminous flux reaching the sensor is mainly determined by two factors: the length of the exposure time and the size of the aperture.
- the terminal device when it obtains the brightness information of the preview image, it can use the brightness information as the photometry result, and use the preset exposure control algorithm to calculate the exposure time and exposure gain based on the brightness information.
- the preset exposure control algorithm may be an AE algorithm.
- the brightness of the image can be adjusted by adjusting the camera aperture size or shutter speed while keeping the sensitivity (ISO) constant, so as to control the exposure; the result is then processed by the ISP and image sensor, so that the exposure-adjusted photo is displayed on the terminal device.
- ISO International Standardization Organization
- the above AE algorithm may include three steps: first, collect brightness statistics for the image produced by the current sensor settings; second, analyze the current brightness and estimate the appropriate brightness; third, change the exposure settings accordingly, and cycle through the preceding steps to maintain the exposure brightness.
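- A minimal sketch of such a feedback loop; the target brightness of 118, the limits on exposure time and gain, and the time-before-gain policy are illustrative assumptions, not parameters given in the disclosure:

```python
def auto_exposure_step(measured_brightness, exposure_time, gain,
                       target=118.0, max_exposure_time=33.0, max_gain=8.0):
    """One AE iteration: compare statistics with the target, then change exposure time and gain."""
    error = target / max(measured_brightness, 1e-6)          # >1 -> image too dark, <1 -> too bright
    total = exposure_time * gain * error                      # desired total exposure
    exposure_time = min(total, max_exposure_time)             # spend as much as possible on exposure time
    gain = min(max(total / exposure_time, 1.0), max_gain)     # remainder goes to analog/digital gain
    return exposure_time, gain

# Hypothetical values: a half-target brightness doubles the total exposure on the next frame.
exposure_time, gain = auto_exposure_step(59.0, exposure_time=10.0, gain=1.0)
```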
- the subject area is accurately determined by combining the convolutional neural network model, which can not only meet the requirements of special scenes to obtain accurate exposure of the subject object, but also perform corresponding compensation exposure for different subject objects, thereby obtaining a better shooting experience and shooting effect.
- when it is necessary to take a picture of a person or scenery, the terminal device runs the camera function, collects image information through the camera module, forms an image preview area on the screen of the terminal device, and forms a preview image of the object to be photographed in the image preview area.
- the preview image can be divided according to brightness to extract candidate areas, and the candidate areas can be input into the pre-trained CNN model for area division processing to extract the subject area, which may include the subject outline and the subject size.
- the terminal device detects the user's screen touch operation and determines the touch metering area of the preview image, and then uses a background algorithm to estimate the subject object to be photographed based on the touch metering area: the second coordinate position corresponding to the subject area extracted through the CNN model and the first coordinate position corresponding to the touch metering area are compared to determine the area corresponding to the matching position, and the area corresponding to the matching position is determined as the subject object.
- use the dynamic weighting method to measure the light of the main object to obtain the brightness information of the preview image.
- the terminal device can segment the subject area where the subject object is located, determine the associated metering areas of the touch metering area, and then, with the touch metering area as the center, assign metering weight values to the touch metering area and the associated metering areas respectively, so as to establish a Gaussian distributed weight table.
- Finally, the preset exposure control (AE) algorithm is used to adjust the exposure, and the ISP and image sensor are used for processing, so that the terminal device displays the exposure-adjusted photo.
- In the automatic exposure method provided by the embodiments of the present disclosure, a preview image is obtained and input into a pre-trained neural network model to extract the subject area.
- the neural network model is configured to perform edge detection and area division on the preview image.
- When a screen touch operation is detected, the touch metering area of the preview image is determined; the subject object is determined based on the touch metering area and the subject area; and the subject object is metered using the dynamic weighting method to obtain the brightness information of the preview image, so that exposure is performed according to the brightness information.
- This method can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of determining the subject object; by metering the subject object with the dynamic weighting method, it ensures proper exposure of the subject in scenes with a large difference in brightness between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the photos.
- Although the steps in the flowcharts of FIGS. 2-5 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-5 may include a plurality of sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time but may be executed at different times, and their order of execution is not necessarily sequential: they may be executed in turn or alternately with at least a part of the other steps, or of the sub-steps or stages of the other steps.
- FIG. 8 is a schematic structural diagram of an automatic exposure device provided by an embodiment of the present disclosure.
- the device may be a device in a terminal device.
- the device 600 includes:
- An acquisition module 610 configured to acquire a preview image
- the area extraction module 620 is configured to input the preview image into a pre-trained neural network model to extract the subject area, and the neural network model is configured to perform edge detection and area division on the preview image;
- the area determination module 630 is configured to determine the touch metering area of the preview image when a screen touch operation is detected;
- the subject determination module 640 is configured to determine the subject object based on the touch metering area and the subject area;
- the light metering module 650 is configured to use a dynamic weighting method to measure the light of the main object to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- the above region extraction module 620 includes:
- the first extraction unit 621 is configured to divide and process the preview image according to brightness, and extract candidate regions;
- the second extraction unit 622 is configured to input the candidate area into a pre-trained neural network model for area division processing, so as to extract the main body area.
- the subject determination module 640 includes:
- the acquiring unit 641 is configured to respectively acquire the first coordinate position corresponding to the touch metering area in the preview image and the second coordinate position corresponding to the main body area in the preview image;
- the first determining unit 642 is configured to determine an area corresponding to a position where the first coordinate position matches the second coordinate position;
- the second determining unit 643 is configured to determine the area corresponding to the matching position as the subject object.
- the light metering module 650 includes:
- the establishment unit 651 is configured to perform segmentation processing on the subject area where the subject object is located, and establish a Gaussian distributed weight table centered on the touch metering area;
- the third determining unit 652 is configured to determine brightness information of the preview image based on the Gaussian distributed weight table.
- the above-mentioned establishing unit 651 is specifically configured as:
- the above-mentioned establishment unit 651 is further configured to:
- the above photometry module 650 is specifically configured as:
- the preset exposure control algorithm is used to calculate the exposure time and exposure gain
- the automatic exposure device obtains the preview image through the acquisition module and inputs the preview image into the pre-trained neural network model through the area extraction module to extract the subject area; then, when a screen touch operation is detected, the area determination module determines the touch metering area of the preview image, the subject determination module determines the subject object based on the touch metering area and the subject area, and the light metering module meters the subject object using the dynamic weighting method to obtain the brightness information of the preview image, so as to perform exposure according to the brightness information.
- This device can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of determining the subject object; by metering the subject object with the dynamic weighting method, it ensures proper exposure of the subject in scenes with a large difference in brightness between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the photos.
- Each module in the above-mentioned automatic exposure device can be fully or partially realized by software, hardware, or a combination thereof.
- the above-mentioned modules can be embedded in or independent of one or more processors in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that one or more processors can call and execute the above The operation corresponding to the module.
- a computer device is provided.
- the computer device may be a terminal, and its internal structure may be as shown in FIG. 9 .
- the computer device includes one or more processors, memory, communication interface, display screen, and input device connected by a system bus.
- the one or more processors of the computer device are configured to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and computer readable instructions.
- the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
- the communication interface of the computer device is configured to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, an operator network, near field communication (NFC) or other technologies.
- WIFI wireless fidelity
- NFC near field communication
- when the computer-readable instructions are executed by one or more processors, an automatic exposure method is realized.
- the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen
- the input device of the computer device may be a touch layer covered on the display screen, or a button, a trackball or a touch pad provided on the casing of the computer device , and can also be an external keyboard, touchpad, or mouse.
- FIG. 9 is only a block diagram of a partial structure related to the disclosed solution and does not constitute a limitation on the computer device to which the disclosed solution is applied; a specific computer device may include more or fewer components than shown in the figure, or combine some components, or have a different arrangement of components.
- the automatic exposure device provided by the present disclosure can be implemented in the form of computer readable instructions, and the computer readable instructions can be run on the computer device as shown in FIG. 9 .
- Various program modules constituting the automatic exposure device can be stored in the memory of the computer device, for example the acquisition module, area extraction module, area determination module, subject determination module, and light metering module shown in FIG. 8.
- the computer-readable instructions constituted by each program module cause one or more processors to execute the steps in the automatic exposure method of each embodiment of the present disclosure described in this specification.
- the computer device shown in FIG. 9 may execute the step of acquiring a preview image through the acquisition module in the automatic exposure device shown in FIG. 8.
- the computer equipment can perform the step of: inputting the preview image into a pre-trained neural network model to extract the main body area through the area extraction module.
- the computer device may use the area determination module to perform the step of: determining the touch metering area of the preview image when a screen touch operation is detected.
- the computer device may perform the step of determining a subject object based on the touch metering area and the subject area through the subject determination module.
- the computer device may perform the step of: using a dynamic weighting method to measure the subject object through the light metering module to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- a computer device including a memory and one or more processors, the memory stores computer-readable instructions, and the one or more processors execute the computer-readable instructions to implement the following steps:
- acquiring a preview image; inputting the preview image into a pre-trained neural network model to extract the subject area, the neural network model being configured to perform edge detection and area division on the preview image; when a screen touch operation is detected, determining the touch metering area of the preview image; determining a subject object based on the touch metering area and the subject area; and performing metering on the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- one or more non-transitory computer-readable storage media having computer-readable instructions stored thereon, the computer-readable instructions being executed by one or more processors When performing the following steps:
- acquiring a preview image; inputting the preview image into a pre-trained neural network model to extract the subject area, the neural network model being configured to perform edge detection and area division on the preview image; when a screen touch operation is detected, determining the touch metering area of the preview image; determining a subject object based on the touch metering area and the subject area; and performing metering on the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- when executed, the computer-readable instructions obtain the preview image and input the preview image into a pre-trained neural network model to extract the subject area, the neural network model being configured to perform edge detection and area division on the preview image.
- This method can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of determining the subject object.
- By metering the subject object with the dynamic weighting method, it ensures proper exposure of the subject in scenes with a large difference in brightness between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the photos.
- Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
- Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory.
- RAM Random Access Memory
- SRAM Static Random Access Memory
- DRAM Dynamic Random Access Memory
- the automatic exposure method provided by the present disclosure can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of subject determination; by performing metering statistics on the subject object with the dynamic weighting method, it can ensure proper exposure of the subject in scenes with a large difference in brightness between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the photos. The method therefore has strong industrial applicability.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
Disclosed are an automatic exposure method, apparatus and device, and a storage medium. The method comprises: acquiring a preview image; inputting the preview image into a pre-trained neural network model, and extracting a main region, the neural network model being configured to perform edge detection and region division on the preview image; when a screen touch operation is detected, determining a touch metering region of the preview image; on the basis of the touch metering region and the main region, determining a main subject; and performing metering on the main subject by means of a dynamic weighting method to obtain brightness information of the preview image so as to perform exposure according to the brightness information. The solution improves the accuracy of determining a main subject, and performs metering on the main subject by means of a dynamic weighting method, thus ensuring proper exposure for the main subject in a scene having a large brightness difference between the subject and a background, preventing photos from being overexposed or underexposed, and enhancing the sharpness of photos.
Description
This disclosure claims the priority of the Chinese patent application with application number 202111447995.2, titled "Automatic exposure method, device, equipment and storage medium", filed with the China Patent Office on November 30, 2021, the entire contents of which are incorporated by reference in this disclosure.
The present disclosure relates to an automatic exposure method, device, equipment and storage medium.
With the rapid development of science and technology, the camera function has been widely used in various electronic products such as mobile phones and computers. By taking photos or videos, it can be used for video conferencing, telemedicine, real-time monitoring and image processing. To prevent insufficient or excessive brightness of the captured image, so that the image has a good exposure effect, it is particularly important to perform light metering and exposure control on the scene before capturing the image.
At present, in the related art, an average metering method, a center-weighted metering method, and a spot metering method can be used to meter a photographed picture. However, for some special scenes, such as scenes where there is a big difference between the brightness of the subject and the background, the above-mentioned average metering and center-weighted metering cannot make the subject obtain a suitable exposure effect, and the above-mentioned spot metering method places high requirements on the selection of the metering point. For ordinary users, if a suitable metering point cannot be selected, the shooting picture will easily be overexposed or underexposed, which will affect the clarity of the photos taken.
Contents of the invention
(1) Technical problems to be solved
In related technologies, in a shooting scene where there is a large difference in light and shade between the subject and the background, the shooting picture may be overexposed or underexposed, which affects the clarity of the taken photo.
(2) Technical solutions
According to various embodiments of the present disclosure, an automatic exposure method, apparatus, device, and storage medium are provided.
A method of automatic exposure, the method comprising:
acquiring a preview image;
inputting the preview image into a pre-trained neural network model to extract the subject area, the neural network model being configured to perform edge detection and area division on the preview image;
when a screen touch operation is detected, determining the touch metering area of the preview image;
determining a subject object based on the touch metering area and the subject area; and
metering the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
In one embodiment, inputting the preview image into a pre-trained neural network model to extract the subject area includes:
dividing the preview image according to brightness and extracting candidate regions; and
inputting the candidate regions into the pre-trained neural network model for area division processing, so as to extract the subject area.
In one embodiment, determining the subject object based on the touch metering area and the subject area includes:
respectively acquiring a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image;
determining the area corresponding to the position where the first coordinate position matches the second coordinate position; and
determining the area corresponding to the matching position as the subject object.
In one embodiment, metering the subject object using the dynamic weighting method to obtain the brightness information of the preview image includes:
segmenting the subject area where the subject object is located and establishing a Gaussian distributed weight table centered on the touch metering area; and
determining the brightness information of the preview image based on the Gaussian distributed weight table.
In one embodiment, segmenting the subject area and establishing a Gaussian distributed weight table centered on the touch metering area includes:
determining the associated metering areas of the touch metering area, the associated metering areas being the areas of the subject area other than the touch metering area; and
with the touch metering area as the center, assigning metering weight values to the touch metering area and the associated metering areas respectively, so as to establish the Gaussian distributed weight table.
In one embodiment, respectively assigning metering weight values to the touch metering area and the associated metering areas includes:
assigning the highest metering weight value to the touch metering area; and
with the touch metering area as the center, assigning corresponding metering weight values to the associated metering areas in descending order, according to the rule that the distance between the associated metering areas and the touch metering area increases from near to far.
In one embodiment, performing exposure according to the brightness information includes:
calculating an exposure time and an exposure gain using a preset exposure control algorithm based on the brightness information; and
when a shutter trigger operation is detected, performing exposure based on the exposure time and the exposure gain.
一种自动曝光装置,该装置包括:An automatic exposure device, the device comprising:
获取模块,配置成获取预览图像;an acquisition module configured to acquire a preview image;
区域提取模块,配置成将所述预览图像输入预先训练好的神经网 络模型中,提取主体区域,所述神经网络模型配置成对所述预览图像进行边缘检测和区域划分;The area extraction module is configured to input the preview image into a pre-trained neural network model to extract the subject area, and the neural network model is configured to perform edge detection and area division on the preview image;
区域确定模块,配置成当检测到屏幕触摸操作时,确定所述预览图像的触摸测光区域;An area determination module configured to determine the touch metering area of the preview image when a screen touch operation is detected;
主体确定模块,配置成基于所述触摸测光区域和所述主体区域,确定主体对象;a subject determination module configured to determine a subject object based on the touch metering area and the subject area;
测光模块,配置成采用动态权重法对所述主体对象进行测光,得到所述预览图像的亮度信息,以根据所述亮度信息进行曝光。The light metering module is configured to use a dynamic weighting method to measure light on the subject object to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
In one embodiment, the area extraction module includes a first extraction unit and a second extraction unit; the first extraction unit is configured to divide the preview image according to brightness and extract candidate areas; the second extraction unit is configured to input the candidate areas into the pre-trained neural network model for area division processing, so as to extract the subject area.
In one embodiment, the subject determination module includes an acquisition unit, a first determination unit and a second determination unit; the acquisition unit is configured to respectively acquire a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image; the first determination unit is configured to determine an area corresponding to a position where the first coordinate position matches the second coordinate position; the second determination unit is configured to determine the area corresponding to the matching position as the subject object.
In one embodiment, the metering module includes an establishment unit and a third determination unit; the establishment unit is configured to segment the subject area where the subject object is located and establish a Gaussian distributed weight table centered on the touch metering area; the third determination unit is configured to determine brightness information of the preview image based on the Gaussian distributed weight table.
In one embodiment, the establishment unit is specifically configured to determine an associated metering area of the touch metering area, where the associated metering area is the area of the subject area other than the touch metering area, and, taking the touch metering area as the center, to assign metering weight values to the touch metering area and the associated metering area respectively, so as to establish the Gaussian distributed weight table.
In one embodiment, the establishment unit is further configured to assign the highest metering weight value to the touch metering area, and, taking the touch metering area as the center, to assign corresponding metering weight values to the associated metering areas in descending order according to their distance from the touch metering area, from nearest to farthest.
In one embodiment, the metering module is specifically configured to calculate an exposure time and an exposure gain by using a preset exposure control algorithm based on the brightness information, and to perform exposure based on the exposure time and the exposure gain when a shutter trigger operation is detected.
A computer device, including a memory and one or more processors, where the memory stores computer-readable instructions, and the one or more processors, when executing the computer-readable instructions, implement the steps of the automatic exposure method provided by any embodiment of the present disclosure.
One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, implement the steps of the automatic exposure method provided by any embodiment of the present disclosure.
Additional features and advantages of the present disclosure will be set forth in the description that follows and will in part become apparent from the description, or may be learned by practice of the present disclosure. The objects and other advantages of the present disclosure are realized and attained by the structure particularly pointed out in the description, the claims and the accompanying drawings; the details of one or more embodiments of the present disclosure are set forth in the drawings and the description below.
In order to make the above objects, features and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below in conjunction with the accompanying drawings.
Other features, objects and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is an application scene diagram of an automatic exposure method provided by one or more embodiments of the present disclosure;
FIG. 2 is a schematic flowchart of an automatic exposure method provided by one or more embodiments of the present disclosure;
FIG. 3 is a schematic structural diagram of a convolutional neural network model provided by one or more embodiments of the present disclosure;
FIG. 4 is a schematic flowchart of a method for determining brightness information of a preview image provided by one or more embodiments of the present disclosure;
FIG. 5 is a schematic diagram of establishing a Gaussian distributed weight table centered on a touch metering area provided by one or more embodiments of the present disclosure;
FIG. 6 is a schematic diagram of a Gaussian distributed weight table provided by one or more embodiments of the present disclosure;
FIG. 7 is a schematic flowchart of an automatic exposure method provided by one or more embodiments of the present disclosure;
FIG. 8 is a schematic structural diagram of an automatic exposure apparatus provided by one or more embodiments of the present disclosure;
FIG. 9 is a schematic structural diagram of a computer device provided by one or more embodiments of the present disclosure.
In order to make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present disclosure and are not intended to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments in the present disclosure and the features in the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. For ease of understanding, some technical terms involved in the embodiments of the present disclosure are explained below:
Automatic Exposure (AE): the camera automatically adjusts the exposure amount according to the intensity of the light to prevent overexposure or underexposure. The purpose of automatic exposure is to achieve a pleasing brightness level, the so-called target brightness level, under different lighting conditions and scenes, so that the captured video or image is neither too dark nor too bright. To achieve this, the lens aperture, the sensor exposure time, the sensor analog gain and the sensor/ISP digital gain are adjusted; this process is called automatic exposure.
Convolutional Neural Network (CNN): a feed-forward neural network that includes convolution calculations and has a deep structure. It is one of the representative algorithms of deep learning and has a representation learning capability. It is composed of one or more convolutional layers and fully connected layers, and also includes associated weights and pooling layers.
Feature extraction: the method and process of using a computer to extract characteristic information from an image. In machine learning, pattern recognition and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, thereby facilitating the subsequent learning and generalization steps.
It can be understood that, as terminal devices become increasingly intelligent, the camera function is used more and more widely in people's daily life. High-quality images need to be based on accurate exposure, and accurate exposure in turn depends on accurate metering, which provides the basis for exposure control. Metering refers to measuring the brightness of the light reflected by the subject, also called reflective metering, and is used by the camera to evaluate the lighting conditions.
At present, in the related art, an average metering method, a center-weighted metering method or a spot metering method can be used to meter the captured picture. The average metering method divides the picture into multiple areas, meters each area independently, and then calculates the metering average of the entire picture; the center-weighted metering method places the emphasis of metering on the central area of the picture and then averages over the entire scene. However, for some special scenes, for example scenes with a large difference in brightness between the subject to be photographed and the background, neither the average metering method nor the center-weighted metering method can obtain a suitable exposure for the subject. The spot metering method meters a single point, which usually is the center of the entire picture, but it places high demands on the selection of the metering point; for ordinary users, if a suitable metering point cannot be selected, the captured picture is prone to overexposure or underexposure, which affects the clarity of the captured photo.
Based on the above defects, the present disclosure provides an automatic exposure method, apparatus, device and storage medium. Compared with the prior art, the method can accurately extract the subject area through a neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of subject object determination; and by performing metering statistics on the subject object using a dynamic weighting method, a suitable exposure of the subject object is ensured in scenes with a large brightness difference between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the captured photo.
The automatic exposure method provided by the embodiments of the present disclosure may be applied in the application environment shown in FIG. 1. FIG. 1 is an application scene diagram of the automatic exposure method in one embodiment. The application environment includes a terminal device 100, which may be a terminal device with an image capture function. The terminal device includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
The terminal device 100 is configured to acquire a preview image and input the preview image into a pre-trained neural network model to extract a subject area; to determine a touch metering area of the preview image when a screen touch operation is detected; to determine a subject object based on the touch metering area and the subject area; and to meter the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
The manner in which the terminal device acquires the preview image may include, but is not limited to, acquisition through the photosensitive element of a charge-coupled device (CCD) or through a CMOS (Complementary Metal-Oxide Semiconductor) photosensitive element.
For ease of understanding and description, the automatic exposure method, apparatus, device and storage medium provided by the embodiments of the present disclosure are described in detail below with reference to FIG. 2 to FIG. 9.
FIG. 2 is a schematic flowchart of the automatic exposure method provided by an embodiment of the present disclosure. The method is applied to a terminal device. As shown in FIG. 2, the method includes:
S101. Acquire a preview image.
It should be noted that the preview image refers to the image of the object to be photographed that is displayed in the image preview area of the terminal device before exposure. For example, when a person or a landscape is photographed, after the terminal device runs the camera function, an image of the person or landscape is formed in a certain area of the shooting interface for the user's reference. At this time, the area of the shooting interface in which the image is displayed is the image preview area, and the displayed person or landscape image is the preview image.
Optionally, the object to be photographed may be a person, a landscape, an animal, an object, or the like; the object may be, for example, a house or a car.
In this step, the terminal device may receive a trigger instruction input by the user and, according to the trigger instruction, start the corresponding shooting application program on the terminal device. The application program may be, for example, a camera; image information is collected through the camera module, so that an image preview area is formed on the screen of the terminal device and a preview image of the object to be photographed is formed in the image preview area.
For example, when a user uses a smart phone to photograph a landscape, the smart phone first receives a command to open the "Camera" application, then automatically opens the "Camera" application, forms an image preview area on the screen, and calls the camera to capture the object to be photographed, so as to form a preview image of the object in the image preview area, whereby the terminal device acquires the preview image.
S102. Input the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image.
In this step, after the preview image is acquired, the preview image may be divided according to brightness to extract candidate areas, and the candidate areas may be input into the pre-trained neural network model for area division processing so as to extract the subject area. Optionally, the neural network model may be a convolutional neural network model.
When the preview image is divided according to brightness, a preset convolutional neural network model may be used to perform the brightness division processing to extract the candidate areas; this convolutional neural network model may be obtained by continuously training the model parameters of an initial convolutional neural network model in advance. Alternatively, the brightness value of each pixel in the preview image may be obtained, and the preview image may be divided according to these brightness values to extract the candidate areas.
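As a rough illustration of the pixel-brightness-based candidate extraction described above, the following Python sketch thresholds per-pixel luma against the scene mean and groups the deviating pixels into candidate bounding boxes. It is a minimal sketch under assumptions: the BT.601 luma weights, the threshold value and the connected-component grouping are illustrative choices, not details taken from the disclosure.

```python
# Hypothetical sketch of brightness-based candidate-region extraction.
import numpy as np
from scipy import ndimage

def extract_candidate_regions(preview_rgb, deviation_threshold=0.35):
    """Return (row1, col1, row2, col2) boxes of regions whose brightness deviates from the mean."""
    rgb = preview_rgb.astype(np.float32)
    # Per-pixel luma (BT.601 weights), normalised to [0, 1].
    luma = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]) / 255.0
    # Mark pixels whose brightness differs strongly from the scene's mean brightness.
    mask = np.abs(luma - luma.mean()) > deviation_threshold
    labels, _ = ndimage.label(mask)          # group connected pixels into candidate regions
    boxes = ndimage.find_objects(labels)
    return [(b[0].start, b[1].start, b[0].stop, b[1].stop) for b in boxes if b is not None]

# Example with a dummy 480x640 preview frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
boxes = extract_candidate_regions(frame)
print(len(boxes), boxes[:3])
```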
It can be understood that, as shown in FIG. 3, the convolutional neural network model may include at least a convolutional layer, a pooling layer and a fully connected layer. The convolutional layer extracts local features from the image through filtering by convolution kernels. The pooling layer performs down-sampling, reduces dimensionality, removes redundant information, compresses features, simplifies the network complexity, and reduces the amount of computation and memory consumption; processing through the pooling layer effectively reduces the size of the parameter matrix, thereby reducing the number of parameters in the final fully connected layer, which speeds up computation and helps prevent overfitting. The fully connected layer is mainly used for classification, so as to output the corresponding result. There may be one or more fully connected layers, and one or more convolutional layers.
In this embodiment, after the candidate areas are obtained and input into the pre-trained neural network model for area division processing, the candidate areas may be passed through the convolutional layer, the pooling layer and the fully connected layer in sequence, so as to determine the subject area.
Specifically, the candidate areas may first be pre-processed to obtain pre-processed candidate areas, which are then input into the convolutional layer for feature extraction to obtain the output of the convolutional layer; the output of the convolutional layer is subjected to nonlinear mapping and then input into the pooling layer for down-sampling to obtain the output of the pooling layer; and the output of the pooling layer is input into the fully connected layer for processing, so that the subject area is extracted. The subject area may include the subject outline and the subject size.
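The convolution / pooling / fully-connected pipeline described above can be pictured with the minimal PyTorch-style sketch below, which maps a fixed-size candidate crop to a four-value subject box. The layer sizes, the input resolution and the box-regression head are assumptions made for the example; the disclosure does not specify the network architecture.

```python
# Hypothetical subject-region network: convolution -> pooling -> fully connected.
import torch
import torch.nn as nn

class SubjectRegionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                             # down-sampling / dimensionality reduction
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),                # fixed-size summary for the FC head
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, 4),                           # (x1, y1, x2, y2) of the subject region
        )

    def forward(self, x):
        return self.head(self.features(x))

crop = torch.rand(1, 3, 128, 128)                        # one candidate-area crop
print(SubjectRegionNet()(crop).shape)                    # torch.Size([1, 4])
```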
Optionally, in the processing performed by the pooling layer, average pooling or max pooling may be used. Average pooling calculates the average value of an image area as the pooled value of that area, while max pooling selects the maximum value of an image area as the pooled value of that area.
The convolutional neural network may be obtained by training as follows: first, the weights of the network are initialized to build an initial convolutional neural network model, and historical images together with their already divided subject areas are obtained; then each historical image is processed through the convolutional layer, the pooling layer and the fully connected layer to obtain an output value for the subject area; the error between this output value and the target value of the already divided subject area is calculated to obtain a loss function, and the weight parameters of the network model are optimized and updated so as to minimize the loss function, thereby obtaining the trained convolutional neural network.
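The training procedure outlined above can be sketched as a standard supervised loop: historical images with labelled subject boxes are fed through the network, a loss between the predicted and labelled boxes is computed, and the weights are updated to reduce it. The optimiser, learning rate and smooth-L1 box loss below are assumptions, not values from the disclosure.

```python
# Hypothetical training step for a subject-region regressor.
import torch

model = torch.nn.Sequential(                      # tiny stand-in for the subject-region network
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(4), torch.nn.Flatten(),
    torch.nn.Linear(8 * 4 * 4, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.SmoothL1Loss()

def train_step(history_images, target_boxes):
    """history_images: (N, 3, H, W) crops; target_boxes: (N, 4) labelled subject regions."""
    optimizer.zero_grad()
    loss = loss_fn(model(history_images), target_boxes)  # error between output and labelled region
    loss.backward()                                       # gradients w.r.t. the weight parameters
    optimizer.step()                                      # update to reduce the loss
    return loss.item()

print(train_step(torch.rand(8, 3, 128, 128), torch.rand(8, 4)))
```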
In this embodiment, extracting the subject area through the convolutional neural network can effectively reduce a large amount of image data to a small amount while preserving the image features, and can prevent image points in the background area from unnecessarily influencing the image points in the subject area during metering, thereby greatly improving the accuracy of metering the image points in the subject area.
S103. When a screen touch operation is detected, determine a touch metering area of the preview image.
Specifically, after the preview image is acquired, the user may tap the preview image on the screen of the terminal device, so that the terminal device detects a screen touch operation. The touch screen of the terminal device includes a series of sensors that can detect capacitance changes caused by a finger; when the user's finger touches the screen, it affects the self-capacitance of each sensor as well as the mutual capacitance between them. Therefore, the touch metering area can be detected by detecting the change in capacitance, and the area in which the capacitance changes is determined as the touch metering area of the preview image.
S104. Determine a subject object based on the touch metering area and the subject area.
Specifically, after the touch metering area and the subject area are determined, the subject object to be photographed may be determined according to a preset algorithm: a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image are acquired respectively, then the area corresponding to the position where the first coordinate position matches the second coordinate position is determined, and the area corresponding to the matching position is determined as the subject object.
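One simple way to realise the coordinate matching described above, sketched under the assumption that both the touch metering area and each candidate subject region are axis-aligned boxes in preview-image coordinates, is to pick the subject box with the largest overlap with the touch box:

```python
# Hypothetical matching of the touch metering area to a subject region.
def box_overlap(a, b):
    """Overlap area of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def pick_subject(touch_box, subject_boxes):
    """Return the subject region whose position best matches the touch area, or None."""
    overlaps = [box_overlap(touch_box, s) for s in subject_boxes]
    best = max(range(len(subject_boxes)), key=lambda i: overlaps[i])
    return subject_boxes[best] if overlaps[best] > 0 else None

# A touch near the frame centre matched against two candidate subject regions.
print(pick_subject((300, 200, 340, 240), [(50, 60, 150, 180), (280, 150, 420, 320)]))
```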
It should be noted that the subject object in this embodiment refers to the object that the terminal device needs to treat with priority during exposure; for example, when an AE convergence algorithm is used for exposure, the subject object is the object with the largest brightness weight value in the algorithm. Optionally, in this embodiment the subject object may be a specific object, such as a person's face, a human body or some other object, or it may be a certain area in the preview image, such as an area of a landscape, or a person and the surrounding area.
For example, when a user takes a photo with a touch-screen smart phone, the user opens the camera application and a preview image is formed in the image preview area; the preview image may be subjected to area division processing through the pre-trained convolutional neural network model to obtain the subject area. After the user performs a tap operation on the preview image, the smart phone determines the touch metering area in the preview image according to the position tapped by the user, and a selection frame may be formed on the screen interface; the area corresponding to the position where this selection frame matches the subject area is then determined as the subject object.
S105. Meter the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
Specifically, after the subject object is determined, the subject object may be metered according to the light it reflects, so as to obtain the brightness information of the preview image.
As an optional implementation, on the basis of the above embodiments, FIG. 4 is a schematic flowchart of the method for determining brightness information of a preview image provided by an embodiment of the present disclosure. As shown in FIG. 4, the method includes:
S201. Segment the subject area where the subject object is located, and establish a Gaussian distributed weight table centered on the touch metering area.
S202. Determine brightness information of the preview image based on the Gaussian distributed weight table.
Specifically, the subject area where the subject object is located may be segmented, for example by means of even segmentation, to determine the associated metering area of the touch metering area, where the associated metering area is the area of the subject area other than the touch metering area; then, taking the touch metering area as the center, metering weight values are assigned to the touch metering area and the associated metering area respectively, so as to establish the Gaussian distributed weight table.
FIG. 5 is a schematic diagram of establishing a Gaussian distributed weight table centered on the touch metering area. As shown in FIG. 5, the figure includes the acquired preview image, the subject area extracted after processing by the convolutional neural network model, the detected touch metering area and the associated metering area, where the associated metering area is the area of the subject area other than the touch metering area; the subject area where the subject object is located is then segmented, and the Gaussian distributed weight table is established centered on the touch metering area.
It can be understood that, since the brightness information of the touch metering area and the brightness information of the associated metering area differ in importance, they are assigned different metering weight values, that is, the metering weight value of the touch metering area is higher than that of the associated metering area.
In this embodiment, FIG. 6 is a schematic diagram of the established Gaussian distributed weight table provided by an embodiment of the present disclosure. As shown in FIG. 6, the highest metering weight value may be assigned to the touch metering area; then, taking the touch metering area as the center, corresponding metering weight values are assigned to the associated metering areas in descending order according to their distance from the touch metering area, from nearest to farthest. For example, the metering weight value of the brightness information assigned to the touch metering area is 100%, and the metering weight values of the brightness information of the associated metering areas, from nearest to farthest from the touch metering area, are 90%, 80%, 60%, 40% and 20% in sequence.
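A Gaussian distributed weight table of the kind shown in FIG. 6 can be sketched as follows: the subject area is split into a grid of blocks, the block containing the touch point receives the highest weight, and the weights of the surrounding blocks fall off with distance following a Gaussian profile. The grid size and the sigma value below are assumptions chosen for the example.

```python
# Hypothetical Gaussian-distributed weight table over subject-area blocks.
import numpy as np

def gaussian_weight_table(grid_h, grid_w, touch_cell, sigma=1.5):
    rows, cols = np.mgrid[0:grid_h, 0:grid_w]
    dist_sq = (rows - touch_cell[0]) ** 2 + (cols - touch_cell[1]) ** 2
    return np.exp(-dist_sq / (2.0 * sigma ** 2))   # 1.0 at the touch block, lower farther away

table = gaussian_weight_table(5, 5, touch_cell=(2, 2))
print(np.round(table, 2))
```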
Further, after the Gaussian distributed weight table is established, the brightness values of the touch metering area and the associated metering areas can be obtained through metering, which may be performed by internal metering or external metering. In the external metering method, the optical paths of the metering element and the lens are independent of each other; this metering method is widely used in viewfinder cameras with lens shutters. In the internal metering method, metering is performed through the lens.
After the brightness values of the touch metering area and the associated metering areas are determined, the Gaussian distributed weight table and the brightness values may be weighted and summed to obtain the brightness information of the preview image. The brightness information may be a brightness value.
For example, suppose the brightness value obtained by metering the touch metering area is X, there are five associated metering areas whose brightness values, from nearest to farthest from the touch metering area, are Y, Z, H, G and K, the metering weight value of the brightness information of the touch metering area is 100%, and the metering weight values of the brightness information of the associated metering areas, from nearest to farthest, are 90%, 80%, 60%, 40% and 20%; then the final brightness information of the entire preview picture is 100%*X + 90%*Y + 80%*Z + 60%*H + 40%*G + 20%*K.
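The weighted summation in the example above amounts to the short computation below; the per-area brightness values are placeholders, and a practical implementation may additionally normalise by the sum of the weights, which the text leaves implicit.

```python
# Worked example of the weighted brightness sum (placeholder luminance values).
weights = [1.00, 0.90, 0.80, 0.60, 0.40, 0.20]   # touch area first, then nearest-to-farthest areas
luma    = [180,  150,  140,  120,  90,   60]     # assumed measured brightness of each area

preview_brightness = sum(w * y for w, y in zip(weights, luma))
print(preview_brightness)                         # optionally divide by sum(weights) to normalise
```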
Further, after the brightness information is determined, exposure may be performed according to the brightness information.
It should be noted that exposure is the physical quantity used to calculate the amount of luminous flux that reaches the camera from the scene. Only when the image sensor obtains the correct exposure can a high-quality photo be obtained: an overexposed image looks too bright, and an underexposed image looks too dark. The amount of luminous flux reaching the sensor is mainly determined by two factors: the length of the exposure time and the size of the aperture.
Specifically, when the terminal device obtains the brightness information of the preview image, the brightness information may be used as the metering result; based on the brightness information, a preset exposure control algorithm is used to calculate the exposure time and the exposure gain, and when the user's shutter trigger operation is detected, exposure is performed based on the exposure time and the exposure gain. Optionally, the exposure control algorithm may be an AE algorithm. In the process of performing exposure based on the exposure time and the exposure gain, with the sensitivity (ISO) unchanged, the image brightness can be adjusted by adjusting the camera aperture size or the shutter speed so as to perform exposure control, and the result is processed through the ISP image sensor, so that the exposure-adjusted photo is displayed on the terminal device.
It can be understood that the AE algorithm may include three steps: first, performing brightness statistics on the brightness under the current sensor parameter settings; second, analyzing and estimating the current brightness to determine a suitable brightness; and third, changing the exposure settings and repeating the previous steps to maintain the brightness of the exposure.
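As one hedged illustration of the exposure-control step, the sketch below scales the current exposure time and analog gain toward a target brightness level, preferring a longer exposure time first and spending the remainder on gain. The target level of 128 and the clamp values are assumptions and are not taken from the disclosure, which leaves the AE algorithm unspecified.

```python
# Hypothetical single iteration of an exposure-control update.
def update_exposure(measured_luma, exposure_time_ms, analog_gain,
                    target_luma=128.0, max_time_ms=33.0, max_gain=8.0):
    ratio = target_luma / max(measured_luma, 1e-3)        # how far we are from the target
    new_time = min(exposure_time_ms * ratio, max_time_ms) # extend exposure time first
    remaining = ratio * exposure_time_ms / new_time       # brightness still missing after the time change
    new_gain = min(max(analog_gain * remaining, 1.0), max_gain)
    return new_time, new_gain

print(update_exposure(measured_luma=60.0, exposure_time_ms=10.0, analog_gain=2.0))
```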
In this embodiment, by combining the convolutional neural network model, the subject area is determined accurately, so that accurate exposure of the photographed subject object can be obtained in special scenes, and corresponding compensated exposure can be performed for different subject objects, thereby achieving a better shooting experience and better shooting results.
As shown in FIG. 7, taking a smart phone as the terminal device as an example, when a person or landscape needs to be photographed, the terminal device runs the camera function and collects image information through the camera module, so that an image preview area is formed on the screen of the terminal device and a preview image of the object to be photographed is formed in the image preview area. The preview image may be divided according to brightness to extract candidate areas, and the candidate areas are input into the pre-trained CNN network model for area division processing to extract the subject area, which may include the subject outline and the subject size. Meanwhile, when the user taps the screen, the terminal device detects the user's screen touch operation and determines the touch metering area of the preview image; a background algorithm then infers the subject object that the user wants to photograph from the touch metering area. From the second coordinate position corresponding to the subject area extracted by the CNN network model and the first coordinate position corresponding to the touch metering area, the area corresponding to the matching position is determined and is taken as the subject object. The subject object is then metered by using the dynamic weighting method to obtain the brightness information of the preview image; specifically, the subject area where the subject object is located is segmented, the associated metering area of the touch metering area is determined, and then, taking the touch metering area as the center, metering weight values are assigned to the touch metering area and the associated metering area respectively, so as to establish a Gaussian distributed weight table. After the brightness information is determined, a preset exposure control algorithm (the AE algorithm) is used to adjust the exposure control, and the result is processed through the ISP image sensor, so that the exposure-adjusted photo is displayed on the terminal device.
The automatic exposure method provided by the embodiments of the present disclosure acquires a preview image, inputs the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image, determines a touch metering area of the preview image when a screen touch operation is detected, determines a subject object based on the touch metering area and the subject area, and meters the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information. The method can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of subject object determination; and by performing metering statistics on the subject object using the dynamic weighting method, a suitable exposure of the subject object is ensured in scenes with a large brightness difference between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the captured photo.
It should be understood that, although the steps in the flowcharts of FIG. 2 to FIG. 5 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 2 to FIG. 5 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or of the sub-steps or stages of the other steps.
In another aspect, FIG. 8 is a schematic structural diagram of an automatic exposure apparatus provided by an embodiment of the present disclosure. The apparatus may be an apparatus within a terminal device. As shown in FIG. 8, the apparatus 600 includes:
an acquisition module 610 configured to acquire a preview image;
an area extraction module 620 configured to input the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image;
an area determination module 630 configured to determine a touch metering area of the preview image when a screen touch operation is detected;
a subject determination module 640 configured to determine a subject object based on the touch metering area and the subject area;
a metering module 650 configured to meter the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
Optionally, the area extraction module 620 includes:
a first extraction unit 621 configured to divide the preview image according to brightness and extract candidate areas;
a second extraction unit 622 configured to input the candidate areas into the pre-trained neural network model for area division processing, so as to extract the subject area.
Optionally, the subject determination module 640 includes:
an acquisition unit 641 configured to respectively acquire a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image;
a first determination unit 642 configured to determine an area corresponding to a position where the first coordinate position matches the second coordinate position;
a second determination unit 643 configured to determine the area corresponding to the matching position as the subject object.
Optionally, the metering module 650 includes:
an establishment unit 651 configured to segment the subject area where the subject object is located and establish a Gaussian distributed weight table centered on the touch metering area;
a third determination unit 652 configured to determine brightness information of the preview image based on the Gaussian distributed weight table.
Optionally, the establishment unit 651 is specifically configured to:
determine an associated metering area of the touch metering area, where the associated metering area is the area of the subject area other than the touch metering area; and
take the touch metering area as the center and assign metering weight values to the touch metering area and the associated metering area respectively, so as to establish the Gaussian distributed weight table.
Optionally, the establishment unit 651 is further configured to:
assign the highest metering weight value to the touch metering area; and
take the touch metering area as the center and assign corresponding metering weight values to the associated metering areas in descending order according to their distance from the touch metering area, from nearest to farthest.
Optionally, the metering module 650 is specifically configured to:
calculate an exposure time and an exposure gain by using a preset exposure control algorithm based on the brightness information; and
perform exposure based on the exposure time and the exposure gain when a shutter trigger operation is detected.
In the automatic exposure apparatus provided by the embodiments of the present disclosure, the acquisition module acquires a preview image, the area extraction module inputs the preview image into a pre-trained neural network model to extract a subject area, the area determination module determines a touch metering area of the preview image when a screen touch operation is detected, the subject determination module determines a subject object based on the touch metering area and the subject area, and the metering module meters the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information. The apparatus can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of subject object determination; and by performing metering statistics on the subject object using the dynamic weighting method, a suitable exposure of the subject object is ensured in scenes with a large brightness difference between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the captured photo.
For specific limitations on the automatic exposure apparatus, reference may be made to the limitations on the automatic exposure method above, which will not be repeated here. Each module in the above automatic exposure apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of one or more processors of a computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the one or more processors can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 9. The computer device includes one or more processors, a memory, a communication interface, a display screen and an input apparatus connected through a system bus. The one or more processors of the computer device are configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for the running of the operating system and the computer-readable instructions in the non-volatile storage medium. The communication interface of the computer device is configured to communicate with an external terminal in a wired or wireless manner, and the wireless manner may be implemented through WIFI, an operator network, near field communication (NFC) or other technologies. The computer-readable instructions, when executed by the one or more processors, implement an automatic exposure method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input apparatus of the computer device may be a touch layer covering the display screen, or a key, trackball or touchpad provided on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art can understand that the structure shown in FIG. 9 is only a block diagram of part of the structure related to the solution of the present disclosure and does not constitute a limitation on the computer device to which the solution of the present disclosure is applied. A specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
In one embodiment, the automatic exposure apparatus provided by the present disclosure may be implemented in the form of computer-readable instructions, and the computer-readable instructions may run on the computer device shown in FIG. 9. The memory of the computer device may store the program modules constituting the automatic exposure apparatus, for example, the acquisition module, the area extraction module, the area determination module, the subject determination module and the metering module shown in FIG. 8. The computer-readable instructions constituted by these program modules cause the one or more processors to execute the steps of the automatic exposure method of the embodiments of the present disclosure described in this specification.
For example, the computer device shown in FIG. 9 may execute, through the acquisition module of the automatic exposure apparatus shown in FIG. 8, the step of acquiring a preview image. The computer device may execute, through the area extraction module, the step of inputting the preview image into a pre-trained neural network model to extract a subject area. The computer device may execute, through the area determination module, the step of determining a touch metering area of the preview image when a screen touch operation is detected. The computer device may execute, through the subject determination module, the step of determining a subject object based on the touch metering area and the subject area. The computer device may execute, through the metering module, the step of metering the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
In one embodiment, a computer device is provided, including a memory and one or more processors, where the memory stores computer-readable instructions, and the one or more processors implement the following steps when executing the computer-readable instructions:
acquiring a preview image; inputting the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image; determining a touch metering area of the preview image when a screen touch operation is detected; determining a subject object based on the touch metering area and the subject area; and metering the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
In one embodiment, one or more non-volatile computer-readable storage media storing computer-readable instructions are provided, and the computer-readable instructions implement the following steps when executed by one or more processors:
acquiring a preview image; inputting the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image; determining a touch metering area of the preview image when a screen touch operation is detected; determining a subject object based on the touch metering area and the subject area; and metering the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
In the above computer-readable storage media, the computer-readable instructions acquire a preview image, input the preview image into a pre-trained neural network model to extract a subject area, where the neural network model is configured to perform edge detection and area division on the preview image, determine a touch metering area of the preview image when a screen touch operation is detected, determine a subject object based on the touch metering area and the subject area, and meter the subject object by using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information. This approach can accurately extract the subject area through the neural network model and determine the subject object in combination with the touch metering area, which improves the accuracy of subject object determination; and by performing metering statistics on the subject object using the dynamic weighting method, a suitable exposure of the subject object is ensured in scenes with a large brightness difference between the subject and the background, thereby avoiding overexposure or underexposure of the captured picture and improving the clarity of the captured photo.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through computer-readable instructions, which may be stored in a non-volatile computer-readable storage medium; when executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to a memory, database or other medium used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, and the like. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness, not all possible combinations of these technical features are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present disclosure, and these all fall within the protection scope of the present disclosure. Therefore, the protection scope of this patent shall be determined by the appended claims.
The automatic exposure method provided by the present disclosure accurately extracts the subject area through a neural network model and combines it with the touch metering area to determine the subject object, improving the accuracy of subject determination. By applying a dynamic weighting method to the photometric statistics of the subject object, it ensures proper exposure of the subject in scenes with a large brightness difference between subject and background, avoiding over-exposed or under-exposed shots and improving the clarity of the captured photos. The method therefore has strong industrial applicability.
Claims (15)
- An automatic exposure method, characterized in that the method comprises: acquiring a preview image; inputting the preview image into a pre-trained neural network model to extract a subject area, the neural network model being configured to perform edge detection and region division on the preview image; when a screen touch operation is detected, determining a touch metering area of the preview image; determining a subject object based on the touch metering area and the subject area; and performing photometry on the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- The method according to claim 1, wherein inputting the preview image into the pre-trained neural network model to extract the subject area comprises: dividing the preview image according to brightness to extract candidate regions; and inputting the candidate regions into the pre-trained neural network model for region division processing, so as to extract the subject area.
- The method according to claim 2, wherein determining the subject object based on the touch metering area and the subject area comprises: respectively acquiring a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image; determining the region corresponding to the position at which the first coordinate position matches the second coordinate position; and determining the region corresponding to the matching position as the subject object.
- The method according to claim 3, wherein performing photometry on the subject object using the dynamic weighting method to obtain the brightness information of the preview image comprises: segmenting the subject area in which the subject object is located and establishing a Gaussian-distributed weight table centered on the touch metering area; and determining the brightness information of the preview image based on the Gaussian-distributed weight table.
- The method according to claim 4, wherein segmenting the subject area and establishing the Gaussian-distributed weight table centered on the touch metering area comprises: determining an associated metering area of the touch metering area, the associated metering area being the part of the subject area other than the touch metering area; and, with the touch metering area as the center, assigning metering weight values to the touch metering area and the associated metering area respectively, so as to establish the Gaussian-distributed weight table.
- The method according to claim 5, wherein assigning metering weight values to the touch metering area and the associated metering area respectively comprises: assigning the highest metering weight value to the touch metering area; and, with the touch metering area as the center, assigning corresponding metering weight values to the associated metering area from high to low, following the rule that portions of the associated metering area closer to the touch metering area receive higher values.
- The method according to claim 6, wherein performing exposure according to the brightness information comprises: calculating an exposure time and an exposure gain from the brightness information using a preset exposure control algorithm; and, when a shutter trigger operation is detected, performing exposure based on the exposure time and the exposure gain.
- The method according to claim 2, wherein dividing the preview image according to brightness to extract the candidate regions comprises: acquiring a brightness value of each pixel in the preview image; and performing brightness division processing on the preview image according to the brightness value of each pixel, so as to extract the candidate regions.
- The method according to claim 2, wherein inputting the candidate regions into the pre-trained neural network model for region division processing to extract the subject area comprises: inputting the candidate regions into the neural network model for preprocessing to obtain preprocessed candidate regions; and passing the preprocessed candidate regions through a convolutional layer, a pooling layer and a fully connected layer in sequence for region division processing, so as to determine the subject area.
- An automatic exposure apparatus, characterized in that the apparatus comprises: an acquisition module configured to acquire a preview image; a region extraction module configured to input the preview image into a pre-trained neural network model to extract a subject area, the neural network model being configured to perform edge detection and region division on the preview image; a region determination module configured to determine a touch metering area of the preview image when a screen touch operation is detected; a subject determination module configured to determine a subject object based on the touch metering area and the subject area; and a metering module configured to perform photometry on the subject object using a dynamic weighting method to obtain brightness information of the preview image, so as to perform exposure according to the brightness information.
- The apparatus according to claim 10, wherein the region extraction module comprises a first extraction unit and a second extraction unit; the first extraction unit is configured to divide the preview image according to brightness to extract candidate regions; and the second extraction unit is configured to input the candidate regions into the pre-trained neural network model for region division processing, so as to extract the subject area.
- The apparatus according to claim 11, wherein the subject determination module comprises an acquisition unit, a first determination unit and a second determination unit; the acquisition unit is configured to respectively acquire a first coordinate position corresponding to the touch metering area in the preview image and a second coordinate position corresponding to the subject area in the preview image; the first determination unit is configured to determine the region corresponding to the position at which the first coordinate position matches the second coordinate position; and the second determination unit is configured to determine the region corresponding to the matching position as the subject object.
- The apparatus according to claim 12, wherein the metering module comprises an establishing unit and a third determination unit; the establishing unit is configured to segment the subject area in which the subject object is located and establish a Gaussian-distributed weight table centered on the touch metering area; and the third determination unit is configured to determine the brightness information of the preview image based on the Gaussian-distributed weight table.
- A computer device, comprising a memory, one or more processors, and computer-readable instructions stored in the memory and executable on the one or more processors, characterized in that the one or more processors, when executing the computer-readable instructions, implement the automatic exposure method according to any one of claims 1 to 9.
- One or more non-volatile computer-readable storage media storing computer-readable instructions, the computer-readable instructions being configured to, when executed by one or more processors, implement the automatic exposure method according to any one of claims 1 to 9.
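For illustration, claim 3's coordinate matching between the touch metering area and the subject area can be sketched as a rectangle-overlap test; representing both areas as bounding boxes and picking the largest overlap are simplifying assumptions, not requirements of the claim.

```python
def match_subject_object(touch_box, subject_boxes):
    """Return the subject region whose position matches the touch metering area.

    Both inputs are (x0, y0, x1, y1) rectangles in preview-image coordinates
    (a simplifying assumption); the region with the largest overlap wins.
    """
    def overlap(a, b):
        iw = min(a[2], b[2]) - max(a[0], b[0])
        ih = min(a[3], b[3]) - max(a[1], b[1])
        return max(0, iw) * max(0, ih)

    best_box, best_area = None, 0
    for box in subject_boxes:
        area = overlap(touch_box, box)
        if area > best_area:
            best_box, best_area = box, area
    return best_box  # None when nothing matches
```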
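Claims 4 to 6 describe a Gaussian-distributed weight table centered on the touch metering area, with the associated metering areas weighted from high to low as their distance from the touch area grows. A minimal sketch, assuming the preview image has already been split into a grid of metering blocks and treating `sigma` as a free tuning parameter:

```python
import numpy as np

def gaussian_weight_table(grid_h, grid_w, touch_cell, sigma=2.0):
    """Metering weight per block: highest at the touched block, decreasing
    with block distance for the associated metering areas."""
    ty, tx = touch_cell
    yy, xx = np.mgrid[0:grid_h, 0:grid_w]
    table = np.exp(-((yy - ty) ** 2 + (xx - tx) ** 2) / (2.0 * sigma ** 2))
    return table / table.max()  # touched block gets the highest weight (1.0)

def weighted_brightness(block_means, weight_table, subject_blocks):
    """Brightness statistic limited to the blocks of the subject area."""
    w = weight_table * subject_blocks  # zero weight outside the subject area
    return float((w * block_means).sum() / (w.sum() + 1e-6))
```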
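Claim 7 leaves the "preset exposure control algorithm" open. One plausible reading, shown only as an assumption, is a proportional update toward a target brightness that extends exposure time before raising gain:

```python
def update_exposure(measured_luma, target_luma=118.0,
                    exp_time=0.01, gain=1.0,
                    max_time=0.05, max_gain=16.0):
    """Scale exposure so the metered brightness approaches the target,
    using up exposure time before raising gain (illustrative policy)."""
    ratio = target_luma / max(measured_luma, 1e-3)
    total = exp_time * gain * ratio      # required time x gain product
    new_time = min(max_time, total)      # prefer a longer exposure first
    new_gain = min(max_gain, total / new_time)
    return new_time, new_gain
```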
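Claim 8's brightness-based division can be pictured as splitting the luma range into bands and keeping the bounding box of each sufficiently populated band as a candidate region; the band count and pixel threshold below are illustrative assumptions:

```python
import numpy as np

def candidate_regions_by_brightness(preview_rgb, n_bands=4, min_pixels=500):
    """Split the preview image into brightness bands and keep the bounding box
    of each sufficiently large band as a candidate region."""
    luma = preview_rgb.astype(np.float32).mean(axis=2)
    edges = np.linspace(luma.min(), luma.max(), n_bands + 1)
    regions = []
    for i in range(n_bands):
        hi = edges[i + 1] + (1e-3 if i == n_bands - 1 else 0.0)
        mask = (luma >= edges[i]) & (luma < hi)
        if mask.sum() < min_pixels:
            continue
        ys, xs = np.where(mask)
        regions.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return regions
```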
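Claim 9's convolution, pooling and fully connected pipeline corresponds to a small CNN classifier over the preprocessed candidate crops. A minimal PyTorch sketch with assumed layer sizes and a two-class (subject vs. background) output, not the patent's trained model:

```python
import torch
import torch.nn as nn

class RegionClassifier(nn.Module):
    """Conv -> pool -> fully-connected scorer for a candidate crop
    (layer sizes are illustrative assumptions)."""
    def __init__(self, num_classes=2, in_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
        )
        self.classifier = nn.Linear(16 * (in_size // 2) ** 2, num_classes)

    def forward(self, x):                  # x: (N, 3, in_size, in_size)
        x = self.features(x)
        return self.classifier(x.flatten(1))             # fully connected layer

# e.g. logits = RegionClassifier()(torch.randn(8, 3, 64, 64))
```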
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111447995.2A CN114257738B (en) | 2021-11-30 | 2021-11-30 | Automatic exposure method, device, equipment and storage medium |
CN202111447995.2 | 2021-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023098743A1 (en) | 2023-06-08 |
Family
ID=80793673
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/135546 WO2023098743A1 (en) | 2021-11-30 | 2022-11-30 | Automatic exposure method, apparatus and device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114257738B (en) |
WO (1) | WO2023098743A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118608592A (en) * | 2024-08-07 | 2024-09-06 | 武汉工程大学 | Line structure light center line extraction method based on light channel exposure self-adaption |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113507570B (en) * | 2021-07-26 | 2023-05-26 | 维沃移动通信有限公司 | Exposure compensation method and device and electronic equipment |
CN114257738B (en) * | 2021-11-30 | 2024-06-28 | 上海闻泰信息技术有限公司 | Automatic exposure method, device, equipment and storage medium |
CN117173141A (en) * | 2023-09-11 | 2023-12-05 | 山东博昂信息科技有限公司 | Smelting observation system based on flame image characteristics |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006053250A (en) * | 2004-08-10 | 2006-02-23 | Fuji Photo Film Co Ltd | Image processing apparatus and imaging apparatus |
CN104219518A (en) * | 2014-07-31 | 2014-12-17 | 小米科技有限责任公司 | Photometry method and device |
CN105227857A (en) * | 2015-10-08 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of method and apparatus of automatic exposure |
CN110163076A (en) * | 2019-03-05 | 2019-08-23 | 腾讯科技(深圳)有限公司 | A kind of image processing method and relevant apparatus |
CN110493527A (en) * | 2019-09-24 | 2019-11-22 | Oppo广东移动通信有限公司 | Main body focusing method, device, electronic equipment and storage medium |
CN114257738A (en) * | 2021-11-30 | 2022-03-29 | 上海闻泰信息技术有限公司 | Automatic exposure method, device, equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101789125B (en) * | 2010-01-26 | 2013-10-30 | 北京航空航天大学 | Method for tracking human skeleton motion in unmarked monocular video |
CN103034997B (en) * | 2012-11-30 | 2017-04-19 | 北京博创天盛科技有限公司 | Foreground detection method for separation of foreground and background of surveillance video |
JP7092616B2 (en) * | 2018-08-24 | 2022-06-28 | セコム株式会社 | Object detection device, object detection method, and object detection program |
CN113657137A (en) * | 2020-05-12 | 2021-11-16 | 阿里巴巴集团控股有限公司 | Data processing method and device, electronic equipment and storage medium |
- 2021-11-30: CN application CN202111447995.2A, publication CN114257738B (active)
- 2022-11-30: WO application PCT/CN2022/135546, publication WO2023098743A1 (status unknown)
Also Published As
Publication number | Publication date |
---|---|
CN114257738A (en) | 2022-03-29 |
CN114257738B (en) | 2024-06-28 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22900547; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
Ref country code: DE |