CN111050081B - Shooting method and electronic equipment - Google Patents
- Publication number
- CN111050081B (application CN201911378833.0A)
- Authority
- CN
- China
- Prior art keywords
- shooting
- image
- attribute
- layer
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The embodiment of the invention discloses a shooting method and electronic equipment. The shooting method comprises the following steps: determining a first attribute of raindrops in a target scene; determining a first shooting parameter according to the first attribute; and shooting the target scene according to the first shooting parameters to obtain a target image. The embodiment of the invention can solve the problems of complex shooting process and poor shooting effect of the rain scene image.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a shooting method and electronic equipment.
Background
Currently, photographing performance has become a key factor both in measuring the performance of an electronic device and in users' purchase decisions.
However, when a conventional electronic device shoots a rain scene directly with one of its built-in shooting modes, for example a slow-motion mode or a close-up mode, only a fixed shooting effect can be obtained; the effect cannot be adjusted according to the actual condition of the rain scene, so the result is poor. If the user wants to shoot according to the actual condition of the rain scene, the user needs to adjust the shooting parameters through repeated trials before usable parameters are found, which makes the shooting process complex.
Disclosure of Invention
The embodiment of the invention provides a shooting method and electronic equipment, and aims to solve the problems of complex shooting process and poor shooting effect of a rain scene image.
In a first aspect, an embodiment of the present invention provides a shooting method applied to an electronic device, where the method includes:
determining a first attribute of raindrops in a target scene;
determining a first shooting parameter according to the first attribute;
and shooting the target scene according to the first shooting parameters to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the first determining module is used for determining a first attribute of raindrops in a target scene;
the second determining module is used for determining the first shooting parameter according to the first attribute;
and the shooting module is used for shooting the target scene according to the first shooting parameters to obtain a target image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method according to the first aspect.
In the embodiment of the invention, when raindrops are contained in the target scene, the first shooting parameters are determined according to the first attributes of the raindrops in the target scene, and then shooting is carried out according to the first shooting parameters to obtain the target image. In the embodiment of the invention, the shooting parameters can be directly determined according to the actual condition of the current rain scene without manually setting and adjusting the parameters by a user, so that the operation process of rain scene shooting is simplified, and the finally obtained target image can have different effects according to the actual condition of the rain scene because the first attributes of raindrops are different and correspond to different first shooting parameters, so that the shooting effect of the rain scene is good.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic flow chart of a shooting method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a bridge control according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of another shooting method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another photographing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the process of shooting a rain scene, if slow motion or close-up is used, raindrops can be captured clearly, but macroscopic objects at medium and long distances are blurred; if conventional focusing at a normal shutter speed is used, even more problems arise, and a good shooting effect is difficult to achieve. Therefore, in order to obtain the desired effect, the user needs to manually set and adjust the shooting parameters, and the shooting process is complicated.
Moreover, most areas of China lie in the monsoon climate zone and receive frequent precipitation throughout the year. Under such conditions, shooting scenes in the rain is a common requirement and a pain point for photographing with electronic devices in this scenario; a method for shooting higher-quality rain-scene images in the rain is therefore needed.
In order to solve the above technical problem, an embodiment of the present invention provides a shooting method applied to an electronic device, and referring to fig. 1, fig. 1 shows a flowchart of the shooting method provided by the embodiment of the present invention. The method comprises the following steps:
s101, determining a first attribute of raindrops in a target scene;
since there are many types of rain-scene images, such as raindrops, rain fog, etc., different shooting parameters need to be set for different types of rain-scenes to obtain clear rain-scene shot images. And the first attribute is an attribute capable of reflecting the size of the rain condition, for example, the first attribute may include at least one of the speed and the size of the raindrop, or may further include the shape of the raindrop, and the like, so as to enrich the display effect of the imaged rain scene image.
S102, determining a first shooting parameter according to the first attribute;
here, the first photographing parameter may include a shutter speed. This is because different shutter speeds correspond to different raindrop shooting effects, and if a clear raindrop image is to be shot, the image can be shot only at a large shutter speed, for example, the shutter speed in the case of normal shooting is 1/60 seconds, the shutter speed in the first shooting parameter may be 1/1000 seconds, and of course, the shutter speed here is also related to the raindrop image effect that the user wants to obtain, for example, 1/1000 seconds shooting a clear water droplet, 1/200 seconds shooting a short raindrop, 1/60 seconds missing rainstrip, 1/20 seconds and rainstrip, etc.
S103, shooting the target scene according to the first shooting parameters to obtain a target image.
In the embodiment of the invention, when raindrops are contained in the target scene, the first shooting parameters are determined according to the first attributes of the raindrops in the target scene, and then shooting is carried out according to the first shooting parameters to obtain the target image. In the embodiment of the invention, the shooting parameters can be directly determined according to the actual condition of the current rain scene without manually setting and adjusting the parameters by a user, so that the operation process of rain scene shooting is simplified, and the finally obtained target image can have different effects according to the actual condition of the rain scene because the first attributes of raindrops are different and correspond to different first shooting parameters, so that the shooting effect of the rain scene is good.
In some embodiments of the present invention, before S101, the method may further include:
and identifying whether the target scene is a rain scene. The specific identification mode may be: whether the preview image in the shooting interface is a rain scene image or not is identified.
In still other embodiments of the present invention, a corresponding relationship between the first attribute and the first shooting parameter may be preset in the electronic device, and after the first attribute is determined, the first shooting parameter corresponding to the first attribute is directly obtained. In addition, in other embodiments, the first shooting parameter corresponding to the first attribute may also be determined according to an input of a user.
Before the target shooting mode is determined, the shooting interface usually displays the effect under ordinary shooting parameters. So that the user can learn through the shooting interface that the electronic device has a rain-scene imaging function and can currently select a target shooting mode, a further embodiment of the present invention prompts the user. Before S101, the method further includes:
and under the condition that the target scene is identified as a rain scene, displaying prompt information of entering rain scene shooting on a shooting interface.
Because this embodiment displays a prompt about entering rain-scene shooting when the framed scene is recognized as a rain scene, the user is informed that the rain-scene shooting process has started and can begin subsequent operations. This helps users who are unaware of this function of the electronic device to use the rain-scene imaging function.
To prompt the user that rain-scene shooting is currently in progress, the prompt information may be text, or the rain-scene image displayed in the shooting interface may be changed, so that the user can visually see the difference between the rain-scene shooting effect and the ordinary shooting effect before shooting.
In a specific embodiment, the displaying, on the shooting interface, a prompt that the shooting of the rain scenery is entered may include:
identifying rain condition information in a target scene;
acquiring initial shooting parameters corresponding to the rain condition information;
and adjusting the display content in the shooting interface according to the initial shooting parameters, and taking the adjusted display content as prompt information.
The rain condition information refers to information such as light rain, moderate rain, and heavy rain. The display content in the shooting interface is adjusted according to the initial shooting parameters, so that the user can visually see, in the shooting interface, the effect after the shooting mode changes to the rain-scene mode. Because the change in effect highlights the rain-scene part, the user directly knows that the rain-scene shooting mode has been entered, which achieves the purpose of prompting the user.
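A minimal sketch of the correspondence between rain condition information and initial shooting parameters might look like the following; the numeric values and field names are illustrative assumptions, not values from the description.

```python
# Hypothetical mapping from recognised rain condition to initial shooting
# parameters. Shutter speeds and exposure offsets are illustrative only.
RAIN_TO_INITIAL = {
    "light rain":    {"shutter_s": 1 / 500, "exposure_ev": 0.3},
    "moderate rain": {"shutter_s": 1 / 250, "exposure_ev": 0.0},
    "heavy rain":    {"shutter_s": 1 / 125, "exposure_ev": -0.3},
}

def initial_parameters(rain_condition: str) -> dict:
    """Look up the initial shooting parameters for the recognised rain condition."""
    if rain_condition not in RAIN_TO_INITIAL:
        raise ValueError(f"unrecognised rain condition: {rain_condition!r}")
    return RAIN_TO_INITIAL[rain_condition]
```

The preview in the shooting interface would then be re-rendered with the returned parameters, which is what serves as the prompt information.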
In some embodiments of the present invention, the S102 may include:
receiving a first input of a user;
determining a second attribute of the raindrop in response to the first input; the second attribute is the attribute of raindrops in the image; wherein the second attribute here may include a vertical display length of the raindrop;
and determining a first shooting parameter according to the first attribute and the second attribute.
If the user does not provide the first input, the first shooting parameter is determined directly according to the first attribute.
The present embodiment determines a second attribute of the raindrop according to the input of the user, where the second attribute is used to reflect the raindrop photographing effect desired by the user. The first shooting parameters are determined according to the actual attributes of the raindrops and the expected attributes of the users to the raindrops, so that the final shooting effect can be guaranteed to meet the requirements of the users as much as possible, and the user experience is improved.
Based on the above embodiments, in some embodiments of the present invention, a bridge control may be displayed on a shooting interface of an electronic device, referring to fig. 2, where fig. 2 illustrates a schematic diagram of a bridge control provided in an embodiment of the present invention. The bridge control can comprise a first sliding shaft and a second sliding shaft, wherein a first mark is arranged on the first sliding shaft, and a second mark is arranged on the second sliding shaft. And the input of the first identifier and the second identifier moved by the user is the first input.
The second attribute of the raindrop changes while the first mark or the second mark is moved. In addition, to help the user see the actual shooting effect, a preview of the raindrops under the current second attribute may be displayed in the shooting interface while the user moves the first identifier or the second identifier. For example, when the user moves the first marker to the left (or to the right), raindrops change from clear water drops to elongated streaks; when the user moves the second marker upward (or downward), the raindrops change to rain fog in cooperation with the wide-angle lens. It should be noted that the positions of the first identifier and the second identifier correspond to different overall shooting effects rather than to values of a single parameter: as the identifiers move, each parameter included in the first shooting parameter changes, i.e., each position of the first identifier and the second identifier in the bridge control corresponds to a different first shooting parameter. The bridge control also occupies little space in the shooting interface.
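The mapping from the two slider positions to a full parameter set could be sketched as below. The log-space interpolation between the example shutter speeds and the rain-fog fields are assumptions chosen only to illustrate that each slider position selects a whole parameter set, not a single value.

```python
import math

def bridge_parameters(x: float, y: float) -> dict:
    """Map normalized positions (0..1) of the two sliding shafts to a
    shooting-parameter set. Sketch only: the interpolation and the
    rain-fog fields are illustrative assumptions."""
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError("slider positions must lie in [0, 1]")
    # First shaft: x = 0 -> 1/1000 s (clear droplets), x = 1 -> 1/20 s
    # (long streaks), interpolated in log-shutter space.
    log_fast, log_slow = math.log(1 / 1000), math.log(1 / 20)
    shutter = math.exp(log_fast + x * (log_slow - log_fast))
    # Second shaft: moving toward 1 shifts the look toward rain fog,
    # rendered in cooperation with the wide-angle lens.
    return {"shutter_s": shutter, "wide_angle": y > 0.5, "fog_strength": y}
```

Moving either identifier re-evaluates this function, so every slider position yields a distinct first shooting parameter, matching the behavior described above.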
In other embodiments of the present invention, the receiving the first input by the user may further include: and receiving selection input of a user to a target shooting mode in a preset shooting mode list, wherein the preset shooting mode list comprises a plurality of shooting modes, and each shooting mode corresponds to different second attributes respectively.
That is, in this embodiment, the user can see the preset shooting mode list and thereby intuitively select the corresponding shooting mode. In this way, the user clearly knows the available options, and the selection is more intuitive.
In the existing rain-scene shooting process, when rain is heavy, a large number of fast-moving raindrops cause serious blur in the main part of the picture. Furthermore, the sky is generally dark in the rain, so exposure needs to be increased to capture medium- and long-distance macroscopic scenery, which in turn makes the water-covered ground too bright. In summary, rain-scene shooting involves raindrops, which are noise-like microscopic objects, together with multiple other shooting targets such as surface water and medium- and long-distance macroscopic scenery, and the traditional High-Dynamic-Range (HDR) and 3A (AF/AE/AWB) algorithms cannot cope well with this shooting task. To solve this problem, another embodiment of the present invention is shown in fig. 3, which is a schematic flowchart of another shooting method provided by an embodiment of the present invention.
The method may further comprise:
s201, determining a first attribute of raindrops in a target scene;
s202, determining a first shooting parameter according to the first attribute; the first shooting parameters comprise a first sub-parameter, a second sub-parameter and a third sub-parameter; s201 to S202 are similar to S101 to S102 in fig. 1, and are not described herein again.
S203, shooting a target scene according to the first sub-parameter to obtain a first image, and performing image layer extraction on the first image to obtain a raindrop image layer;
since the first attribute of the raindrops is different, for example, the raindrop speed is different, and the shutter speed needs to be increased to obtain a clear raindrop layer, the first sub-parameter should have a larger shutter speed, and the specific magnitude of the shutter speed is related to the value of the first attribute.
S204, shooting the target scene according to the second sub-parameters to obtain a second image, and performing layer extraction on the second image to obtain a background layer;
the background layer is a layer including background images such as dark sky, ground and distant macro scenery. Since different first attributes correspond to different rain conditions, e.g. heavy rain, medium rain, light rain. Different rain conditions may have a certain influence on the brightness of the environment, for example, a dark day in case of heavy rain and a bright day in case of light rain, so that a clear background image is desired to be obtained, different exposures may be set according to different first attributes in the second sub-parameter, and the exposure in the second sub-parameter is also controlled by the HDR algorithm.
S205, shooting a target scene according to the third sub-parameter to obtain a third image, and performing image layer extraction on the third image to obtain a middle-distance object image layer; wherein the map layer of the medium-distance object comprises surface water accumulation information;
because the middle-distance object image layer comprises the ground ponding information, different degrees of ponding can be caused by different rain conditions, and therefore in order to obtain a clear middle-distance object image layer, the third sub-parameter can also be influenced by the first attribute.
In addition, the background image belongs to the distant-view part, medium-distance objects belong to the medium- and near-view part, and raindrops belong to the extreme close-up (macro) part. Therefore, to obtain a clear background layer and a clear medium-distance object layer, the focus position in the second sub-parameter should be farther than the focus position in the third sub-parameter.
There is no restriction on the execution order among S203 to S205.
And S206, fusing the raindrop image layer, the background image layer and the middle distance object image layer to obtain a target image.
The layer extraction principle may be based on the local gray-level mean of the image or on other layering principles, which are not limited by the present invention. Layer fusion means combining and superposing the contents contained in the layers.
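A toy sketch of the extraction-and-fusion principle, with images as nested lists of gray values: extraction keeps the pixels whose gray value deviates from a local mean by more than a tolerance, and fusion superposes the layers pixel-wise. This illustrates only the stated principle, not the actual implementation, and the single global mean is a simplifying assumption.

```python
def extract_layer(image, local_mean, tol):
    """Keep pixels deviating from the gray-level mean by more than tol;
    zero out the rest. A real implementation would compute the mean per
    local neighbourhood rather than use a single value."""
    return [[p if abs(p - local_mean) > tol else 0 for p in row] for row in image]

def fuse_layers(*layers):
    """Superpose layers by taking, at each position, the pixel from
    whichever layer kept content there (the maximum gray value)."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[max(layer[i][j] for layer in layers) for j in range(cols)]
            for i in range(rows)]
```

In the method above, `fuse_layers` would receive the raindrop layer, the background layer, and the medium-distance object layer to produce the target image.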
In the embodiment of the present invention, when the current framed scene is a rain scene, shooting is performed separately with different sub-parameters to obtain a first image, a second image, and a third image. The content of a rain-scene image can be divided into three main parts: the raindrop part, the background part, and the medium-distance object part. By setting appropriate sub-parameters, the first, second, and third images can be made to contain, respectively, a clear raindrop part, a clear background part, and a clear medium-distance object part. Layer extraction is then performed on the clear part of each image, yielding a raindrop layer, a background layer, and a medium-distance object layer that each contain clear objects. The rain-scene image obtained by fusing these three layers therefore contains clear raindrops as well as a clear background and clear medium-distance objects, so the whole rain-scene image is in a clear state and the imaging effect is improved.
Optionally, the second sub-parameter and the third sub-parameter may specify only whether to focus on a distant view or a near view; during actual shooting, the specific focus position is determined by the auto-focus function of the 3A algorithm. In addition, parameters not set in the first shooting parameter may also be set automatically by the 3A algorithm during shooting, and the background layer and the medium-distance object layer may be given different degrees of exposure automatically through HDR.
In a specific embodiment, the medium-distance objects in the medium-distance object layer refer to objects whose length reaches a preset percentage of the length of the image frame, or whose width reaches a preset percentage of the width of the image frame. The preset percentage may be 1/10. The background image included in the background layer may be the image of the portion other than the raindrops and the medium-distance objects.
In other embodiments of the present invention, the above S203 to S205 may include:
shooting a target scene according to the first sub-parameters to obtain a first image, and extracting the layer of the first image through a pre-trained first semantic segmentation model to obtain a raindrop layer;
shooting a target scene according to the second sub-parameters to obtain a second image, and extracting a layer of the second image through a pre-trained second semantic segmentation model to obtain a background layer;
and shooting the target scene according to the third sub-parameters to obtain a third image, and extracting the image layer of the third image through a pre-trained third semantic segmentation model to obtain a middle-distance object image layer.
Wherein the first image, the second image and the third image each comprise 2 frames of images.
Performing layer extraction through a pre-trained semantic segmentation model improves the accuracy of the extracted layers. In this case, the number of image frames required as input can be reduced accordingly, so that extracting a single layer requires only 2 frames instead of the 3 frames typically needed at present; fewer input frames effectively improve the shooting performance experience.
In an embodiment of the present invention, the training process of the first semantic segmentation model may be: using pre-acquired shot images and the corresponding raindrop layers as samples, a deep learning model is trained to learn the mapping between shot images and raindrop layers, yielding the trained first semantic segmentation model. Similarly, the second semantic segmentation model may be trained with pre-acquired shot images and the corresponding background layers as samples, and the third semantic segmentation model with pre-acquired shot images and the corresponding medium-distance object layers as samples.
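As a toy stand-in for this training idea (the actual models are deep networks), the following learns, from (image, raindrop-layer) sample pairs, a single gray threshold that best reproduces the labelled layers. It illustrates only the supervised image-to-layer mapping, with flattened pixel lists as images.

```python
def train_threshold(pairs, candidates=range(256)):
    """Pick the gray threshold whose 'pixel > threshold' rule best
    matches the labelled raindrop layers across all training pairs.
    A crude, illustrative stand-in for fitting a segmentation model."""
    def errors(t):
        return sum((pixel > t) != bool(label)
                   for image, layer in pairs
                   for pixel, label in zip(image, layer))
    return min(candidates, key=errors)
```

A real first semantic segmentation model would minimize a per-pixel loss over many parameters instead of searching one threshold, but the supervision signal, image paired with its ground-truth layer, is the same.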
In order to reduce an error in the layer fusion process, the present invention further provides another embodiment, as shown in fig. 4, and fig. 4 shows a schematic flow diagram of another shooting method provided by the embodiment of the present invention. The first photographing parameter further includes a fourth sub-parameter, and the method further includes, before S206:
shooting the target scene according to the fourth sub-parameter to obtain a reference image;
and aligning the raindrop image layer, the background image layer and the middle distance object image layer with the reference image respectively.
That is, in this embodiment, during continuous capture of multiple image frames, one frame is captured as a reference frame. Because the reference image is used only for subsequent layer alignment, the objects in it need not be especially clear or have special effects, so the fourth sub-parameter used when capturing the reference image is not particularly limited in the present invention. The order between capturing the reference image and S203, S204, and S205 is not limited.
In addition, the alignment operation is usually performed by comparing pixel points. Specifically, some feature points may be used as coordinate anchors for alignment; the feature points may be pixels at the four corners or four edges of the image, or other feature points capable of locating the image position. Through the alignment operation, the pixel points in the layers correspond correctly to one another, which avoids errors caused by pixel misalignment between layers and makes the finally fused target image more accurate.
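A minimal pure-translation sketch of this alignment idea, using a single crude feature point (the brightest pixel, standing in for the corner or edge feature points mentioned above). Real alignment would match many feature points and estimate a full transform rather than a shift.

```python
def brightest(image):
    """Coordinates of the brightest pixel, used as a crude feature point."""
    return max(((i, j) for i, row in enumerate(image) for j in range(len(row))),
               key=lambda ij: image[ij[0]][ij[1]])

def align_to_reference(layer, reference):
    """Shift `layer` so its feature point coincides with the reference's;
    pixels shifted outside the frame are dropped, vacated cells are zero."""
    (ri, rj), (li, lj) = brightest(reference), brightest(layer)
    di, dj = ri - li, rj - lj
    h, w = len(layer), len(layer[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                out[ni][nj] = layer[i][j]
    return out
```

Each of the raindrop, background, and medium-distance object layers would be passed through `align_to_reference` against the reference image before fusion.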
To improve the accuracy of the extracted layers, in some embodiments of the present invention, after S205 and before S206, the method may further include denoising the captured first, second, and third images. Alternatively, because raindrops may themselves be regarded as noise, only the second and third images may be denoised so that the raindrops are not removed as well. The denoising operation may also be performed on the three extracted layers, or only on the extracted background layer and medium-distance object layer. The specific denoising method is not limited by the present invention.
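One simple denoising choice consistent with the paragraph above is a median filter, sketched here over each row of a nested-list image; as stated, the actual denoising method is not limited by the invention, and this row-wise variant is purely illustrative.

```python
from statistics import median

def denoise_rows(image):
    """Replace each pixel with the median of its 3-pixel row
    neighbourhood (edges padded by replication). Impulse noise,
    such as an isolated bright pixel, is suppressed."""
    result = []
    for row in image:
        padded = [row[0]] + list(row) + [row[-1]]
        result.append([median(padded[i:i + 3]) for i in range(len(row))])
    return result
```

Applied to the second and third images only, such a filter would remove isolated bright specks while leaving the separately captured raindrop image untouched, matching the raindrop-preserving variant described above.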
Based on the above method embodiment, correspondingly, the embodiment of the present invention further provides an electronic device, and referring to fig. 5 first, fig. 5 shows a schematic structural diagram of an electronic device provided by the embodiment of the present invention. The electronic device includes:
a first determining module 301, configured to determine a first attribute of a raindrop in a target scene;
a second determining module 302, configured to determine a first shooting parameter according to the first attribute;
and the shooting module 303 is configured to shoot the target scene according to the first shooting parameter to obtain a target image.
In the embodiment of the invention, when raindrops are contained in the target scene, the first shooting parameters are determined according to the first attributes of the raindrops in the target scene, and then shooting is carried out according to the first shooting parameters to obtain the target image. In the embodiment of the invention, the shooting parameters can be directly determined according to the actual condition of the current rain scene without manually setting and adjusting the parameters by a user, so that the operation process of rain scene shooting is simplified, and the finally obtained target image can have different effects according to the actual condition of the rain scene because the first attributes of raindrops are different and correspond to different first shooting parameters, so that the shooting effect of the rain scene is good.
In some embodiments of the invention, the electronic device may further comprise:
and the identification module is used for identifying whether the target scene is a rain scene. The specific identification mode may be: whether the preview image in the shooting interface is a rain scene image or not is identified.
In still other embodiments of the present invention, a corresponding relationship between the first attribute and the first shooting parameter may be preset in the electronic device, and after the first attribute is determined, the first shooting parameter corresponding to the first attribute is directly obtained. In addition, in other embodiments, the first shooting parameter corresponding to the first attribute may also be determined according to an input of a user.
In some embodiments of the present invention, the second determining module 302 specifically includes:
a first receiving unit for receiving a first input of a user;
an attribute determining unit configured to determine a second attribute of the raindrop in response to the first input; the second attribute is the attribute of raindrops in the image;
and the parameter determining unit is used for determining the first shooting parameter according to the first attribute and the second attribute.
The present embodiment determines a second attribute of the raindrop according to the input of the user, where the second attribute is used to reflect the raindrop photographing effect desired by the user. The first shooting parameters are determined according to the actual attributes of the raindrops and the expected attributes of the users to the raindrops, so that the final shooting effect can be guaranteed to meet the requirements of the users as much as possible, and the user experience is improved.
Based on the above embodiments, in some embodiments of the present invention, a bridge control may be displayed on the shooting interface of the electronic device. The bridge control may include a first sliding shaft provided with a first identifier and a second sliding shaft provided with a second identifier; the user's input of moving the first identifier and the second identifier constitutes the first input. The bridge control reduces the space occupied in the shooting interface.
In other embodiments of the present invention, the first receiving unit may be further configured to: and receiving selection input of a user to a target shooting mode in a preset shooting mode list, wherein the preset shooting mode list comprises a plurality of shooting modes, and each shooting mode corresponds to different second attributes respectively.
That is, in this embodiment the user can see the preset shooting mode list and intuitively select the corresponding shooting mode. This makes the available options explicit to the user and the selection more intuitive.
In still other embodiments of the present invention, the first shooting parameter includes a first sub-parameter, a second sub-parameter and a third sub-parameter;
the shooting module 303 specifically includes:
the first shooting unit is used for shooting the target scene according to the first sub-parameter to obtain a first image, and extracting a layer of the first image to obtain a raindrop layer;
the second shooting unit is used for shooting the target scene according to the second sub-parameter to obtain a second image, and extracting a layer of the second image to obtain a background layer;
the third shooting unit is used for shooting the target scene according to the third sub-parameter to obtain a third image, and extracting a layer of the third image to obtain a middle-distance object layer, wherein the middle-distance object layer includes surface water accumulation information;
and the fusion unit is used for fusing the raindrop layer, the background layer and the middle-distance object layer to obtain a target image.
In the embodiment of the invention, a raindrop layer, a background layer and a middle-distance object layer each containing clear objects can be obtained, so that the rain-scene image produced by fusing the three layers contains clear raindrops as well as a clear background and clear middle-distance objects. The whole rain-scene image is thus in a clear state, and its imaging effect is improved.
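The fusion step described above can be sketched with plain mask compositing, where the background layer fills everything the other masks miss; the mechanics below are an assumption for illustration, not the patent's exact algorithm.

```python
import numpy as np

# Minimal sketch of mask-based layer fusion (assumed mechanics): each
# extracted layer contributes the pixels its segmentation mask selects,
# with the background layer filling everything the other masks miss.

def fuse_layers(raindrop, mid, background, raindrop_mask, mid_mask):
    """Compose the target image from three layers.

    raindrop/mid/background: HxWx3 float arrays shot with different
    sub-parameters; *_mask: HxW boolean masks from layer extraction.
    """
    out = background.copy()
    out[mid_mask] = mid[mid_mask]                 # mid-distance objects (incl. puddles)
    out[raindrop_mask] = raindrop[raindrop_mask]  # raindrops drawn on top
    return out

h, w = 4, 4
bg = np.zeros((h, w, 3))
md = np.ones((h, w, 3))
rd = np.full((h, w, 3), 2.0)
m_mid = np.zeros((h, w), bool); m_mid[2:, :] = True   # lower half: mid-distance
m_rd = np.zeros((h, w), bool);  m_rd[0, 0] = True     # one raindrop pixel
fused = fuse_layers(rd, md, bg, m_rd, m_mid)
print(fused[0, 0, 0], fused[3, 3, 0], fused[1, 1, 0])  # 2.0 1.0 0.0
```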
In still other embodiments of the present invention,
the first shooting unit may be specifically configured to: shooting a target scene according to the first sub-parameters to obtain a first image, and extracting the layer of the first image through a pre-trained first semantic segmentation model to obtain a raindrop layer;
the second shooting unit may be specifically configured to: shooting a target scene according to the second sub-parameters to obtain a second image, and extracting a layer of the second image through a pre-trained second semantic segmentation model to obtain a background layer;
the third shooting unit may be specifically configured to: and shooting the target scene according to the third sub-parameters to obtain a third image, and extracting the image layer of the third image through a pre-trained third semantic segmentation model to obtain a middle-distance object image layer.
Wherein the first image, the second image and the third image each comprise 2 frames of images.
Performing layer extraction with a pre-trained semantic segmentation model improves the accuracy of the extracted layers. In this case, the number of image frames required as input can be reduced accordingly: extracting a single layer requires only 2 frames, compared with the 3 frames typically required at present. Fewer input frames are therefore needed, which effectively improves the shooting performance experience.
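As a rough illustration of 2-frame layer extraction, the pre-trained segmentation model can be stood in for by frame differencing, since fast-moving raindrops are the pixels that change between consecutive frames. This substitute is purely illustrative and much cruder than a trained model.

```python
import numpy as np

# Sketch of the extraction step with a 2-frame input. The trained semantic
# segmentation model is replaced by a stand-in, frame differencing, which
# flags pixels that changed between the two frames as raindrops.

def extract_raindrop_layer(frame_a, frame_b, thresh=0.2):
    """Return (mask, layer): pixels that moved between the two frames are
    treated as raindrops and copied out of frame_a."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    mask = diff.max(axis=-1) > thresh            # per-pixel change, any channel
    layer = np.where(mask[..., None], frame_a, 0.0)
    return mask, layer

a = np.zeros((3, 3, 3))
b = np.zeros((3, 3, 3))
a[1, 1] = 0.9                                    # a "drop" present only in frame a
mask, layer = extract_raindrop_layer(a, b)
print(int(mask.sum()))  # 1
```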
In order to reduce errors in the layer fusion process, a further embodiment of the present invention is provided, in which the first shooting parameter further includes a fourth sub-parameter, and the shooting module 303 further includes:
the fourth shooting unit is used for shooting the target scene according to the fourth sub-parameter to obtain a reference image;
and the alignment unit is used for aligning the raindrop image layer, the background image layer and the middle distance object image layer with the reference image respectively.
That is, in this embodiment, during continuous capture of multiple frames, one frame is captured as a reference frame. Because the reference image is used only for subsequent layer alignment, the objects in it need not be particularly clear or have special effects, so the fourth sub-parameter used when capturing the reference image is not particularly limited in the present invention.
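The role of the reference frame can be shown with a toy aligner: each layer is shifted until it best matches the reference image. A production pipeline would use feature matching or ECC-style registration; the brute-force integer-shift search below is only a sketch.

```python
import numpy as np

# Toy alignment against the reference frame: exhaustively search small
# integer translations and keep the one minimizing mean squared error.
# Real systems use feature matching or ECC; this is an illustrative stand-in.

def estimate_shift(layer, reference, max_shift=3):
    """Find the (dy, dx) shift of `layer` that best matches `reference`."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(layer, dy, 0), dx, 1)
            err = np.mean((shifted - reference) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((8, 8)); ref[4, 4] = 1.0
layer = np.roll(ref, (1, -2), (0, 1))   # layer displaced by (+1, -2)
print(estimate_shift(layer, ref))       # (-1, 2), the inverse displacement
```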
In order to improve the accuracy of the extracted layers, in some embodiments of the present invention, the shooting module 303 may further include:
a denoising module, used for denoising the captured first image, second image and third image. Alternatively, since raindrops may be mistaken for noise, the denoising module may instead denoise only the second image and the third image, so that raindrops are not removed. The denoising module may also perform the denoising operation on the three extracted layers, or only on the extracted background layer and middle-distance object layer; the present invention does not limit the specific method.
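A sketch of the selective variant, in which only the background and middle-distance layers are filtered so raindrops are not removed as noise; the choice of a 3x3 median filter and the mechanics are assumptions for illustration.

```python
import numpy as np

# Selective denoising sketch: median-filter the background and mid-distance
# layers only, leaving the raindrop layer untouched so drops survive.

def median3(img):
    """3x3 median filter with edge replication, pure numpy, 2-D input."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

def denoise_layers(raindrop, background, mid):
    """Raindrop layer passes through; the other two are filtered."""
    return raindrop, median3(background), median3(mid)

noisy = np.zeros((5, 5)); noisy[2, 2] = 1.0      # a lone "noise" pixel
rd, bg, md = denoise_layers(noisy.copy(), noisy.copy(), noisy.copy())
print(rd[2, 2], bg[2, 2])  # 1.0 0.0  (kept in raindrop layer, removed in background)
```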
Furthermore, in other embodiments of the present invention, the electronic device may further include:
and the prompting module is used for displaying prompting information of entering the rain scene shooting on a shooting interface of the electronic equipment under the condition that the target scene is identified as the rain scene.
Because this embodiment displays a prompt message about entering rain-scene shooting when the framed scene is recognized as a rain scene, the user is informed that the rain-scene shooting process has started and can begin subsequent operations. This helps users who are unfamiliar with this function of the electronic device to use the rain-scene imaging function.
In a specific embodiment, the prompt module may be configured to:
identifying rain condition information in a target scene; acquiring initial shooting parameters corresponding to the rain condition information; and adjusting the display content in the shooting interface according to the initial shooting parameters, and taking the adjusted display content as prompt information.
In this embodiment, the display content in the shooting interface is adjusted according to the initial shooting parameters, so the user can directly see in the shooting interface how the shooting effect changes after switching to the rain-scene shooting mode: the effect changes to highlight the rain-scene part. The user thereby knows directly that the rain-scene shooting mode has been entered, which achieves the effect of prompting the user.
The electronic device provided in the embodiment of the present invention can implement each method step implemented in the embodiments of fig. 1 and fig. 3, and is not described herein again to avoid repetition.
Based on the above method embodiment, the present invention further provides an embodiment of an electronic device, and referring to fig. 6, fig. 6 shows a hardware structure schematic diagram of an electronic device provided in the embodiment of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 410 is configured to determine a first attribute of a raindrop in a target scene; determining a first shooting parameter according to the first attribute; and shooting the target scene according to the first shooting parameters to obtain a target image.
In the embodiment of the invention, when the target scene contains raindrops, a first shooting parameter is determined according to a first attribute of the raindrops in the target scene, and shooting is then performed with that parameter to obtain the target image. Because the shooting parameter is derived directly from the actual condition of the current rain scene, the user does not need to set and adjust parameters manually, which simplifies the operation of rain-scene shooting. Moreover, since different first attributes of raindrops correspond to different first shooting parameters, the resulting target image varies with the actual rain conditions, so the rain-scene shooting effect is good.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during message transceiving or a call. Specifically, it receives downlink data from a base station and delivers it to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401 for output.
The electronic device 400 also includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and the proximity sensor turns off the display panel 4061 and/or the backlight when the electronic device 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-identification-related functions (such as a pedometer or tapping). The sensor 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be described in detail here.
The display unit 406 is used to display information input by the user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations by the user on or near it (for example, operations by the user on or near the touch panel 4071 using a finger, a stylus, or any suitable object or accessory). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 410, and receives and executes commands from the processor 410. In addition, the touch panel 4071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 4071, the user input unit 407 may include other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, and are not described here again.
Further, the touch panel 4071 can be overlaid on the display panel 4061. When the touch panel 4071 detects a touch operation on or near it, the operation is transmitted to the processor 410 to determine the type of the touch event, and the processor 410 then provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 6 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, which is not limited here.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 409 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; optionally, the processor 410 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and optionally, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the foregoing shooting method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (8)
1. A shooting method, applied to an electronic device, characterized in that the method comprises:
determining a first attribute of raindrops in a target scene;
determining a first shooting parameter according to the first attribute;
shooting the target scene according to the first shooting parameters to obtain a target image;
the first shooting parameters comprise a first sub-parameter, a second sub-parameter and a third sub-parameter;
the shooting the target scene according to the first shooting parameter to obtain a target image specifically includes:
shooting the target scene according to the first sub-parameter to obtain a first image, and performing layer extraction on the first image to obtain a raindrop layer;
shooting the target scene according to the second sub-parameters to obtain a second image, and performing layer extraction on the second image to obtain a background layer;
shooting the target scene according to the third sub-parameter to obtain a third image, and performing layer extraction on the third image to obtain a middle-distance object layer; wherein the middle-distance object layer comprises surface water accumulation information;
and fusing the raindrop image layer, the background image layer and the middle-distance object image layer to obtain the target image.
2. The method according to claim 1, wherein the first attribute comprises at least one of a speed and a size of the raindrop, and the first photographing parameter comprises a shutter speed.
3. The method according to claim 1, wherein determining a first shooting parameter according to the first attribute specifically includes:
receiving a first input of a user;
determining a second attribute of the raindrop in response to the first input; wherein the second attribute is an attribute of the raindrop in an image;
and determining a first shooting parameter according to the first attribute and the second attribute.
4. The method of claim 3, wherein the second attribute of the raindrop comprises a vertical display length of the raindrop.
5. The method of claim 1, wherein prior to determining the first attribute of the raindrops in the target scene, further comprising:
and displaying prompt information of entering the rain scene shooting on a shooting interface of the electronic equipment.
6. An electronic device, comprising:
the first determining module is used for determining a first attribute of raindrops in a target scene;
the second determining module is used for determining a first shooting parameter according to the first attribute;
the shooting module is used for shooting the target scene according to the first shooting parameters to obtain a target image;
the first shooting parameters comprise a first sub-parameter, a second sub-parameter and a third sub-parameter;
the shooting module specifically comprises:
the first shooting unit is used for shooting the target scene according to the first sub-parameter to obtain a first image, and extracting a layer of the first image to obtain a raindrop layer;
the second shooting unit is used for shooting the target scene according to the second sub-parameters to obtain a second image, and extracting the layer of the second image to obtain a background layer;
the third shooting unit is used for shooting the target scene according to the third sub-parameter to obtain a third image, and extracting the layer of the third image to obtain a middle-distance object layer; the middle-distance object layer comprises surface water accumulation information;
and the fusion unit is used for fusing the raindrop image layer, the background image layer and the middle-distance object image layer to obtain the target image.
7. The electronic device according to claim 6, wherein the second determining module specifically includes:
a first receiving unit for receiving a first input of a user;
an attribute determining unit configured to determine a second attribute of the raindrop in response to the first input; wherein the second attribute is an attribute of the raindrop in an image;
and the parameter determining unit is used for determining a first shooting parameter according to the first attribute and the second attribute.
8. The electronic device of claim 6, further comprising:
and the prompting module is used for displaying the prompting information of entering the rain scene shooting on the shooting interface of the electronic equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911378833.0A CN111050081B (en) | 2019-12-27 | 2019-12-27 | Shooting method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111050081A CN111050081A (en) | 2020-04-21 |
CN111050081B true CN111050081B (en) | 2021-06-11 |
Family
ID=70239647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911378833.0A Active CN111050081B (en) | 2019-12-27 | 2019-12-27 | Shooting method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111050081B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114125300B (en) * | 2021-11-29 | 2023-11-21 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and readable storage medium |
CN115623323A (en) * | 2022-11-07 | 2023-01-17 | 荣耀终端有限公司 | Shooting method and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102735867A (en) * | 2012-06-01 | 2012-10-17 | 华南理工大学 | Raindrop terminal velocity photographic surveying method |
CN106954051A (en) * | 2017-03-16 | 2017-07-14 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108022283A (en) * | 2017-12-06 | 2018-05-11 | 北京像素软件科技股份有限公司 | Rainwater analogy method, device and readable storage medium storing program for executing |
CN108270970A (en) * | 2018-01-24 | 2018-07-10 | 北京图森未来科技有限公司 | A kind of Image Acquisition control method and device, image capturing system |
CN209433555U (en) * | 2019-01-28 | 2019-09-24 | 北京昊恩星美科技有限公司 | A kind of Car license recognition all-in-one machine with rainproof function |
CN110610190A (en) * | 2019-07-31 | 2019-12-24 | 浙江大学 | Convolutional neural network rainfall intensity classification method for rainy pictures |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7660517B2 (en) * | 2005-03-16 | 2010-02-09 | The Trustees Of Columbia University In The City Of New York | Systems and methods for reducing rain effects in images |
FR2940000A1 (en) * | 2008-12-17 | 2010-06-18 | Bernard Taillade | PERIMETER SAFETY SYSTEM BY ACTIVE ANALYSIS OF THE IMAGE OF A VIDEO CAMERA |
KR102155521B1 (en) * | 2014-05-23 | 2020-09-14 | 삼성전자 주식회사 | Method and apparatus for acquiring additional information of electronic devices having a camera |
US9967467B2 (en) * | 2015-05-29 | 2018-05-08 | Oath Inc. | Image capture with display context |
CN105554374A (en) * | 2015-11-30 | 2016-05-04 | 东莞酷派软件技术有限公司 | Shooting prompting method and user terminal |
US20170163877A1 (en) * | 2015-12-08 | 2017-06-08 | Le Holdings (Beijing) Co., Ltd. | Method and electronic device for photo shooting in backlighting scene |
CN106341595A (en) * | 2016-08-26 | 2017-01-18 | 刘华英 | Mobile terminal intelligent shooting method and device |
CN108200351A (en) * | 2017-12-21 | 2018-06-22 | 深圳市金立通信设备有限公司 | Image pickup method, terminal and computer-readable medium |
2019-12-27: application CN201911378833.0A (CN) filed; granted as patent CN111050081B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN111050081A (en) | 2020-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111083380B (en) | Video processing method, electronic equipment and storage medium | |
CN109361865B (en) | Shooting method and terminal | |
CN111182205B (en) | Photographing method, electronic device, and medium | |
CN107592466B (en) | Photographing method and mobile terminal | |
CN108495029B (en) | Photographing method and mobile terminal | |
CN108989678B (en) | Image processing method and mobile terminal | |
CN111010508B (en) | Shooting method and electronic equipment | |
CN107948505B (en) | Panoramic shooting method and mobile terminal | |
CN111064895B (en) | Virtual shooting method and electronic equipment | |
CN110930329A (en) | Starry sky image processing method and device | |
CN109819168B (en) | Camera starting method and mobile terminal | |
CN109495616B (en) | Photographing method and terminal equipment | |
CN111601032A (en) | Shooting method and device and electronic equipment | |
CN111246102A (en) | Shooting method, shooting device, electronic equipment and storage medium | |
CN111752450A (en) | Display method and device and electronic equipment | |
CN109246351B (en) | Composition method and terminal equipment | |
CN108174110B (en) | Photographing method and flexible screen terminal | |
CN111050081B (en) | Shooting method and electronic equipment | |
CN111176526B (en) | Picture display method and electronic equipment | |
CN109639981B (en) | Image shooting method and mobile terminal | |
CN111064888A (en) | Prompting method and electronic equipment | |
JP7472281B2 (en) | Electronic device and focusing method | |
CN110602384B (en) | Exposure control method and electronic device | |
CN110913133B (en) | Shooting method and electronic equipment | |
CN111432122B (en) | Image processing method and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |