
CN108156369B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN108156369B
CN108156369B (application CN201711277634.1A)
Authority
CN
China
Prior art keywords
image
target
main
brightness
target scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711277634.1A
Other languages
Chinese (zh)
Other versions
CN108156369A (en)
Inventor
谭国辉
姜小刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711277634.1A
Publication of CN108156369A
Application granted
Publication of CN108156369B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method and device, wherein the method comprises the following steps: controlling a main camera to shoot a first main image of a target scene according to standard exposure parameters, controlling a secondary camera to shoot a first secondary image of the target scene according to the standard exposure parameters, and controlling the main camera to shoot a second main image of the target scene according to overexposure parameters; calculating first depth-of-field information of a target image in the target scene according to the first main image and the first secondary image through a first thread, while acquiring a first target image from the first main image through a second thread and acquiring a first background image from the second main image through a third thread; and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image. In this way, the target image and the background image in the target scene are each shot with suitable exposure parameters, improving both the shooting effect and the image processing efficiency.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
At present, the photographing function of terminal devices is deeply embedded in users' daily work and life; photographing has become a widespread demand, with users recording their lives and more through the photographing function of the terminal device.
In the related art, in order to ensure the imaging effect of the whole image, exposure parameters are determined according to the average brightness of the shooting scene. When the brightness of the current shooting subject differs from that of the environment, the determined exposure parameters are either influenced by the ambient light and thus unsuited to the current shooting subject, or influenced by a bright shooting subject so that the background area is underexposed; in either case the effect of the shot image is poor.
Summary of the application
The application provides an image processing method and device, which aim to solve the technical problem in the prior art that the brightness of the shooting subject and the brightness of the background area affect each other, so that the shooting effect is poor.
An embodiment of the present application provides an image processing method, including: controlling a main camera to shoot a first main image of a target scene according to standard exposure parameters, controlling a secondary camera to shoot a first secondary image of the target scene according to the standard exposure parameters, and controlling the main camera to shoot a second main image of the target scene according to overexposure parameters; calculating first depth-of-field information of a target image in the target scene according to the first main image and the first secondary image through a first thread, simultaneously acquiring a first target image from the first main image through a second thread, and simultaneously acquiring a first background image from the second main image through a third thread; and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
Another embodiment of the present application provides an image processing apparatus, including: the shooting module is used for controlling the main camera to shoot a first main image of a target scene according to standard exposure parameters, controlling the auxiliary camera to shoot a first auxiliary image of the target scene according to the standard exposure parameters, and controlling the main camera to shoot a second main image of the target scene according to overexposure parameters; the acquisition module is used for calculating first depth of field information of a target image in the target scene according to the first main image and the first auxiliary image through a first thread, acquiring a first target image from the first main image through a second thread, and acquiring a first background image from the second main image through a third thread; and the processing module is used for synthesizing the first target image and the first background image according to the first depth of field information of the target image to acquire a target scene image.
Yet another embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the image processing method according to the above-mentioned embodiment of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method according to the above-mentioned embodiment of the present application.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of controlling a main camera to shoot a first main image of a target scene according to standard exposure parameters, controlling a secondary camera to shoot a first secondary image of the target scene, controlling a main camera to shoot a second main image of the target scene according to overexposure parameters, calculating first depth-of-field information of the target image in the target scene according to the first main image and the first secondary image through a first thread, obtaining a first target image from the first main image through a second thread, obtaining a first background image from the second main image through a third thread, and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image. Therefore, the imaging effect of the whole image can be ensured, especially the imaging effect of the whole image when the difference between the ambient brightness and the brightness of the shooting subject is large, and the image processing efficiency is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of an image processing method according to one embodiment of the present application;
FIG. 2 is a schematic diagram of triangulation according to one embodiment of the present application;
FIG. 3 is a schematic diagram of dual-camera depth information acquisition according to one embodiment of the present application;
FIG. 4 is a schematic view of a scene of an image processing method according to an embodiment of the present application;
FIG. 5 is a diagram illustrating the effect of image processing according to the prior art;
FIG. 6 is a diagram illustrating the effects of image processing according to one embodiment of the present application;
FIG. 7 is a flow diagram of an image processing method according to another embodiment of the present application;
FIG. 8 is a schematic view of a scene of an image processing method according to another embodiment of the present application;
FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 11 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application; and
FIG. 12 is a schematic diagram of an image processing circuit according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image processing method and apparatus of an embodiment of the present application are described below with reference to the drawings.
The execution subject of the image processing method and device may be a terminal device, where the terminal device may be a hardware device with dual cameras, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses, and the like.
Based on the above analysis, it can be seen that the brightness of the shooting subject and the ambient brightness may affect each other, so that the shooting effect is poor. In particular, when the difference between the brightness of the shooting subject and the ambient brightness is large, the exposure parameters for the shooting subject are easily skewed by the ambient brightness, and the exposure for the background area is easily skewed by the brightness of the shooting subject. For example, when a performer under stage lighting is photographed from a dark auditorium, the image region corresponding to the performer is overexposed while the audience area is underexposed and loses detail.
In order to solve the above technical problem, the image processing method according to the embodiment of the present application performs appropriate exposure based on the shooting background and the shooting subject, and then synthesizes the shooting subject with a good exposure effect and the background region, so that not only is the exposure effect of the image subject improved, but also the details of the whole image are ensured to be rich.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, as shown in fig. 1, the method including:
Step 101, controlling the main camera to shoot a first main image of the target scene according to the standard exposure parameters, and simultaneously controlling the sub-camera to shoot a first sub-image of the target scene according to the standard exposure parameters.
Step 102, controlling the main camera to shoot a second main image of the target scene according to the overexposure parameters.
Specifically, the terminal device takes pictures through a dual-camera system, which calculates depth-of-field information from a main image taken by the main camera and a sub-image taken by the sub-camera. The dual-camera system includes a main camera for acquiring the main image of the shooting subject and a sub-camera for assisting in acquiring the depth-of-field information; the main camera and the sub-camera may be arranged along the horizontal direction or the vertical direction. To describe more clearly how the dual cameras acquire depth-of-field information, the principle is explained below with reference to the accompanying drawings:
In practical applications, the depth information resolved by human eyes mainly depends on binocular vision, which works on the same principle as depth resolution by dual cameras, namely triangulation, as shown in fig. 2. Fig. 2 shows an imaged object in actual space, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras; the distance between the focal planes and the plane where the two cameras are located is f, and the two cameras image at their focal planes, thereby obtaining two captured images.
P and P' are the positions of the same object in the two different captured images, where the distance from point P to the left boundary of its captured image is X_R and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
Based on the principle of triangulation, the distance Z between the object and the plane where the two cameras are located in fig. 2 satisfies the following relationship:
    B / Z = (B - (X_R - X_T)) / (Z - f)
From this it can be derived that
    Z = (B · f) / (X_R - X_T) = (B · f) / d
where d = X_R - X_T is the difference between the positions of the same object in the different captured images. Since B and f are constant, the distance Z of the object can be determined from d.
Of course, besides triangulation, other methods may also be used to calculate the depth-of-field information of the main image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is in a proportional relationship with the displacement difference, attitude difference, and the like of the images formed by the main and sub cameras; therefore, in an embodiment of the present application, the distance Z may be obtained from such a proportional relationship.
For example, as shown in fig. 3, a map of the differences between the main image captured by the main camera and the sub-image captured by the sub-camera is calculated, represented here by a disparity map, which records the displacement difference between the same points on the two images. Since the displacement difference in triangulation is inversely proportional to Z (d = B·f/Z), the disparity map is often used directly as the depth information map.
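As a rough illustration of the Z = B·f/d relation above (a sketch, not code from the patent), the following Python snippet computes a disparity map with OpenCV's block matcher and converts it to depth; the baseline and focal length are placeholder calibration values, and the inputs are assumed to be rectified 8-bit grayscale frames of the same size.

```python
import cv2
import numpy as np

def depth_from_stereo(main_gray, sub_gray, baseline_mm=12.0, focal_px=1400.0):
    """Estimate a per-pixel depth map (mm) from a rectified grayscale stereo pair."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(main_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan         # unmatched / occluded pixels
    return baseline_mm * focal_px / disparity  # Z = B * f / d
```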
Specifically, in the embodiment of the present application, depth-of-field information is calculated for the same object appearing in the main image and the sub-image, and the main image serves as the base image of the final actual imaging. To avoid inaccurate depth-of-field calculation caused by a large difference between the main image and the sub-image, or a poor final imaging effect caused by an unclear main image, the main camera is controlled to shoot the first main image of the target scene according to the standard exposure parameters. These standard exposure parameters are based on the subject shot in the target scene: if the brightness of the shooting subject is high, the aperture size and the like corresponding to the standard exposure parameters are small; if the brightness of the shooting subject is lower, the corresponding aperture is larger, and so on. Based on this prior knowledge, the exposure parameters obtained by focusing and metering on the shooting subject of the current scene may be used as the standard exposure parameters.
Accordingly, the first main image and the first sub image of the target scene photographed based on the standard exposure parameters include a relatively clear image of the photographic subject, and the depth information of the photographic subject can be further determined based on the first main image and the first sub image.
In addition, when the current ambient brightness is relatively dark, the exposure parameters are affected by a relatively bright shooting subject, causing underexposure. For example, if the current ambient brightness is 50 while the brightness of the shooting subject is 100, the brightness corresponding to the finally determined exposure parameters may be pulled to 60 by the subject, which obviously leaves the background area of the shot image unclear due to underexposure. In the embodiment of the present application, the main camera is therefore controlled to shoot the second main image of the target scene according to the overexposure parameters, so as to ensure that the background area of the shot target scene is fully exposed. The overexposure parameters are defined relative to the standard exposure parameters and relative to the brightness of the background area in the currently shot target scene: the lower the brightness of the background area relative to the brightness of the shooting subject, the larger the overexposure parameters compared with the standard exposure parameters.
Of course, imaging in dark environments tends to be poor, so the embodiment of the present application is explained mainly with the common case of a bright shooting subject in a dark environment. In practical applications the opposite shooting scene also exists: photographing a low-brightness subject in a strong-light environment. In that case, the main camera may be controlled to shoot a first main image of the target scene according to the standard exposure parameters while the sub-camera shoots a first sub-image of the target scene, and the main camera may be controlled to shoot a second main image of the target scene according to underexposure parameters.
To further improve the flexibility of the image processing of the present application, the image processing method of this embodiment may be triggered only when the brightness difference between the shooting environment and the shooting subject is so large that the imaging effect of the whole image would be hard to guarantee.
Specifically, in an embodiment of the present application, a first brightness of the environment where the shooting subject is located in the target scene and a second brightness of the shooting subject are detected. For example, the area where the shooting subject is located and the surrounding environment area may each be focused and metered in the preview picture of the target scene, and the first brightness and the second brightness determined from the focusing and metering parameter values (aperture size, etc.). As another example, when a user frequently shoots a fixed kind of scene, for example frequently watches performances and photographs the actors on stage, a correspondence between the shooting scene and the first and second brightness, or their difference, may be preset and stored; the current shooting scene is then matched against the pre-stored scenes to obtain the corresponding first and second brightness or their difference. As yet another example, a deep learning model may be constructed in advance from a large amount of test data, whose input is the brightness distribution of the photographed scene, such as whether the border or the center of the finder frame is brighter, and whose output is the first and second brightness or their difference. Here the first brightness of the shooting environment corresponds to the brightness of the background area of the target image in the shot target scene, and the second brightness of the target image corresponds to the brightness of the current shooting subject. If the difference between the second brightness and the first brightness is determined to be greater than or equal to a preset threshold, the main camera is controlled to shoot a first main image of the target scene according to the standard exposure parameters while the sub-camera shoots a first sub-image of the target scene, and the main camera is controlled to shoot a second main image of the target scene according to the overexposure parameters.
The preset threshold may be a reference brightness value calibrated according to data of a large number of experiments and used for determining whether the ambient brightness and the brightness of the shooting subject affect the imaging effect, and may also be related to imaging hardware of the terminal device, where the higher the photosensitivity of the imaging hardware is, the higher the preset threshold is.
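As a hedged sketch of this detection-and-threshold step (the patent does not prescribe an implementation), the snippet below meters the subject and environment areas of a preview frame and compares their difference against the preset threshold; the boolean subject mask, the threshold value, and the strategy names are assumptions for illustration.

```python
import numpy as np

PRESET_THRESHOLD = 40.0  # calibrated per device, as described above (assumed value)

def mean_luma(bgr, mask):
    # BT.601 luma of a BGR frame, averaged over the masked pixels.
    luma = 0.114 * bgr[..., 0] + 0.587 * bgr[..., 1] + 0.299 * bgr[..., 2]
    return float(luma[mask].mean())

def choose_capture_strategy(preview_bgr, subject_mask):
    second = mean_luma(preview_bgr, subject_mask)   # subject brightness
    first = mean_luma(preview_bgr, ~subject_mask)   # environment brightness
    if second - first >= PRESET_THRESHOLD:
        return "dual_exposure"     # steps 101-102: standard pair + overexposed main
    return "uniform_exposure"      # fig. 7 branch: single exposure, multiple groups
```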
Step 103, calculating first depth-of-field information of a target image in the target scene according to the first main image and the first sub-image through the first thread, simultaneously acquiring a first target image from the first main image through the second thread, and simultaneously acquiring a first background image from the second main image through the third thread.
Step 104, synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
Based on the above analysis, when the dual cameras acquire depth-of-field information, the positions of the same object in the different captured images need to be obtained, and the clearer the images involved, the more accurate the acquired depth-of-field information. In this example, the first depth-of-field information of the target image (the subject) is acquired from the first main image and the first sub-image taken under the standard exposure parameters; since the imaging effect of these two images is good and the images are clear, the acquired first depth-of-field information is more accurate.
As analyzed above, the first main image is exposed based on the brightness of the photographic subject, so the imaging effect of the target image corresponding to the subject in the first main image is good; the second main image is exposed based on the brightness of the background of the currently photographed target scene, so the imaging effect of the background area in the second main image is good. To improve the imaging effect of the entire image, in one embodiment of the present application a first target image is extracted from the first main image (the first target image can be determined by image recognition, contour recognition, and the like) and a first background image is obtained from the second main image; the image generated by synthesizing the first target image and the first background image then has rich detail in both the shooting subject and the background area, and the imaging effect of the whole image is good.
In actual execution, in order to make the synthesis of the first target image and the first background image more natural, the two are synthesized according to the first depth-of-field information of the target image; the specific implementation differs with the application scene:
as a possible implementation manner, since the larger the depth information is, the smaller the object in the image is, and the smaller the depth information is, the larger the object in the image is, the size of the relevant area in the first target image can be adaptively adjusted based on the first depth information, and the first target image and the first background image are synthesized to avoid distortion and the like caused by improper size when the first target image and the first background image are synthesized.
As another possible implementation manner, since the larger the depth information is, the more blurred the image is, and the smaller the depth information is, the sharper the image is, the first target image may be pixel-filled based on the first depth information and synthesized with the first background image, so that the synthesis effect of the two images is more natural.
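As a minimal sketch of the first strategy above (depth-adaptive sizing followed by a paste), assuming an inverse depth-to-size model, a reference depth, a soft alpha edge on the target cutout, and a paste position that fits inside the background; all of these are illustrative assumptions, not the patent's prescribed method.

```python
import cv2
import numpy as np

def compose(target_rgba, background_bgr, depth_mm, ref_depth_mm=1000.0, top_left=(0, 0)):
    # Farther target -> smaller paste, per the inverse depth/size relation above.
    scale = float(np.clip(ref_depth_mm / depth_mm, 0.5, 2.0))
    h, w = target_rgba.shape[:2]
    target = cv2.resize(target_rgba, (max(1, int(w * scale)), max(1, int(h * scale))))
    y, x = top_left
    th, tw = target.shape[:2]
    roi = background_bgr[y:y + th, x:x + tw]   # assumes the scaled target fits here
    alpha = target[..., 3:4].astype(np.float32) / 255.0   # soft-edge blend
    roi[:] = (alpha * target[..., :3] + (1.0 - alpha) * roi).astype(np.uint8)
    return background_bgr
```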
Further, in order to further optimize the imaging effect, in an embodiment of the present application, the first background image may be further blurred according to the first depth information of the target image.
Specifically, according to different application scenes, the blurring process may be performed on the first background image according to the first depth information of the target image in a plurality of different manners:
as a possible implementation: the blurring strength of the first background image is determined according to the first depth of field information of the target image, and then blurring processing is carried out on the corresponding background area according to the blurring strength of the first background area, so that blurring of different degrees is carried out according to different depth of field information, and the blurring image is more natural in effect and rich in layering.
It should be noted that, depending on the application scenario, different implementations may be used to determine the blurring strength of the first background image from the depth-of-field information of the first target image. As one possible implementation, when the depth-of-field information of the first target image is more accurate, the outline of the first target object is proved to be clearer; in that case even strong blurring of the first background image is unlikely to falsely blur the first target image, so the blurring strength of the background area may be larger. A correspondence between the calculation accuracy of the depth-of-field information of the first target image and the blurring strength of the first background image may therefore be established in advance, and the blurring strength of the background area obtained from this correspondence.
As another possible implementation, the farther a position in the first background image is from the first target image, the less relevant it is proved to be to the target; therefore, the blurring strength may be set according to the distance from each pixel point in the first background image to the first target image, with a greater distance corresponding to a higher blurring strength.
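A minimal sketch of this distance-based strategy, assuming a single Gaussian blur whose per-pixel weight grows with distance to the target region; the kernel size and the linear distance normalization are illustrative choices.

```python
import cv2
import numpy as np

def blur_by_distance(background_bgr, target_mask, max_kernel=31):
    # Distance (px) from each background pixel to the nearest target pixel:
    # distanceTransform measures distance to the nearest zero, so zero out the target.
    dist = cv2.distanceTransform((~target_mask).astype(np.uint8), cv2.DIST_L2, 5)
    weight = np.clip(dist / max(float(dist.max()), 1.0), 0.0, 1.0)[..., None]
    blurred = cv2.GaussianBlur(background_bgr, (max_kernel, max_kernel), 0)
    # Farther from the target -> more of the blurred image in the mix.
    return (weight * blurred + (1.0 - weight) * background_bgr).astype(np.uint8)
```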
Furthermore, in the embodiment of the present application, since the calculation of the depth information takes a long time, the first depth information of the target image in the target scene is calculated by the first thread according to the first main image and the first sub-image, and at the same time, the first target image is acquired from the first main image by the second thread, and at the same time, the first background image is acquired from the second main image by the third thread.
Thus, on the one hand, as shown in fig. 4, the first target image and the first background image have already been acquired by the time the first depth-of-field information is calculated, so the corresponding image synthesis can be performed directly once the first depth-of-field information is available; compared with a processing flow that first acquires depth-of-field information and then synthesizes several captured frames of main images, this improves image processing efficiency. On the other hand, the first target image and the first background image are acquired from images shot under the standard exposure parameters and the overexposure parameters respectively, that is, from a subject image and a background image each shot with rich detail; compared with shooting many frames under different exposure parameters and then synthesizing them, an image in which the whole frame is well imaged can be obtained with only two shots, improving efficiency. In yet another aspect, still referring to fig. 4, the acquisition of the first target image and of the first background image to be synthesized is split across two parallel threads, further narrowing the gap between the depth-of-field calculation time and the time needed to acquire the two images, and improving image processing efficiency.
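The three-way split of step 103 might look like the following sketch, which uses a Python thread pool in place of whatever threading primitive a real implementation would choose; the worker functions are placeholders for the operations named in step 103.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder workers standing in for the operations named in step 103.
def compute_depth(first_main, first_sub): ...
def extract_target(first_main): ...
def extract_background(second_main): ...
def synthesize(target, background, depth): ...

def process_frames(first_main, first_sub, second_main):
    with ThreadPoolExecutor(max_workers=3) as pool:
        depth_f = pool.submit(compute_depth, first_main, first_sub)   # thread 1
        target_f = pool.submit(extract_target, first_main)            # thread 2
        bg_f = pool.submit(extract_background, second_main)           # thread 3
        # Step 104 starts as soon as all three parallel results are ready.
        return synthesize(target_f.result(), bg_f.result(), depth_f.result())
```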
In order to more clearly illustrate the effect of the image processing in the embodiment of the present application, the following example is taken in conjunction with a specific application scenario:
the first scenario is:
when an auditorium with poor ambient light photographs actors on a stage, an image acquired in a photographing mode in the prior art is shown in fig. 5, a background area is under-exposed, so that the background area is not clear, and an area where the actors are located is over-exposed, so that the area where the actors are located is not clear (in the figure, the image definition is represented by the gray scale of the image, and the clearer the image, the lower the gray scale of the corresponding image area is).
With the image processing method of the present application, the main camera is controlled to shoot a first main image of the target scene according to the standard exposure parameters while the sub-camera shoots a first sub-image of the target scene, and the main camera is controlled to shoot a second main image of the target scene according to the overexposure parameters. First depth-of-field information of the area where the actors are located is calculated through the first thread from the first main image and the first sub-image; the image of the actors is acquired from the first main image through the second thread; and a background image is acquired from the second main image through the third thread. After the actor image and the first background image are synthesized according to the first depth-of-field information of the actor image, as shown in fig. 6, both the actors and the background area receive appropriate exposure, so the whole image is relatively clear.
In summary, the image processing method according to the embodiment of the present application controls the main camera to shoot a first main image of a target scene according to a standard exposure parameter, controls the auxiliary camera to shoot a first auxiliary image of the target scene, and controls the main camera to shoot a second main image of the target scene according to an overexposure parameter, calculates first depth-of-field information of the target image in the target scene according to the first main image and the first auxiliary image through a first thread, acquires the first target image from the first main image through a second thread, acquires a first background image from the second main image through a third thread, and then performs synthesis processing on the first target image and the first background image according to the first depth-of-field information of the target image. Therefore, the imaging effect of the whole image can be ensured, especially the imaging effect of the whole image when the difference between the ambient brightness and the brightness of the shooting subject is large, and the image processing efficiency is improved.
Based on the description of the above embodiments, when the difference between the ambient brightness and the brightness of the subject is not large, an image captured with a single uniform exposure parameter does not noticeably harm the display of the whole image: such an exposure parameter can keep both the subject and the background area clearly imaged, optimizing the imaging effect of the captured image.
Specifically, in another embodiment of the present application, as shown in fig. 7, before the above step 101 the method may further include:
Step 201, detecting a first brightness of the shooting environment and a second brightness of the target image.
Step 202, if it is detected that the difference between the second brightness and the first brightness is smaller than the preset threshold, determining an exposure parameter according to the difference.
Specifically, if the difference between the second brightness and the first brightness is smaller than the preset threshold value, the exposure parameter is determined according to the difference, and the exposure parameter is suitable for clear imaging of both the background area and the shooting subject.
As a possible implementation manner, a corresponding relationship between the difference between the second brightness and the first brightness and the exposure parameter may be established in advance according to a large amount of experimental data, and after the difference between the current second brightness and the first brightness is obtained, the corresponding relationship is queried to obtain the exposure parameter corresponding to the current shot target scene.
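Such a pre-established correspondence could be as simple as the lookup sketched below; the table entries (brightness differences and EV compensation values) are invented sample data, not values from the patent.

```python
# (max brightness difference, exposure compensation in EV): invented sample values.
EXPOSURE_TABLE = [(10, 0.0), (25, 0.3), (40, 0.7)]

def exposure_for_difference(diff):
    for max_diff, ev in EXPOSURE_TABLE:
        if diff <= max_diff:
            return ev
    # A difference at or above the preset threshold takes the fig. 1 branch instead.
    return 0.7
```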
Step 203, controlling the main camera to shoot multiple groups of main images of the target scene under the determined exposure parameters, and simultaneously controlling the sub-camera to shoot multiple groups of sub-images of the target scene.
Specifically, in order to improve the imaging effect, the main camera is controlled to shoot multiple groups of main images of the target scene under these exposure parameters while the sub-camera shoots multiple groups of sub-images of the target scene.
Step 204, obtaining a reference main image from the multiple groups of main images, and obtaining, from the multiple groups of sub-images, the reference sub-image shot in the same group as the reference main image.
It can be understood that, in the embodiment of the application, because the main camera and the sub-camera shoot multiple groups of main images and sub-images simultaneously, the image information of a main image and a sub-image belonging to the same group, shot at the same instant, is very close, and calculating the depth-of-field information from same-group main and sub-images ensures that the obtained depth-of-field information is relatively accurate.
Specifically, a reference main image is selected from the multiple groups of main images, and the reference sub-image captured in the same group as the reference main image is selected from the multiple groups of sub-images. It should be emphasized that, during actual capture, the main camera and sub-camera capture their groups of images at the same frequency, and a main image and sub-image captured at the same instant belong to the same group. For example, in chronological order, the groups of main images captured by the main camera comprise main image 11, main image 12, and so on, while the groups of sub-images captured by the sub-camera comprise sub-image 21, sub-image 22, and so on; main image 11 and sub-image 21 form one group, main image 12 and sub-image 22 form another, and so forth. To further improve the efficiency and accuracy of depth-of-field acquisition, a reference main image and reference sub-image of higher definition may be selected from the groups; to improve selection efficiency, several frames of main images and their corresponding sub-images may first be shortlisted by image definition and the like, and the reference main image and the corresponding reference sub-image then selected from these clearer candidates.
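One plausible definition criterion for this selection is the variance of the Laplacian (an assumption; the patent only speaks of "definition"), as in this sketch:

```python
import cv2

def sharpness(gray):
    # Variance of the Laplacian: a common focus/definition measure.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_reference_pair(main_frames, sub_frames):
    # main_frames[i] and sub_frames[i] were captured at the same instant (same group).
    idx = max(range(len(main_frames)), key=lambda i: sharpness(main_frames[i]))
    return main_frames[idx], sub_frames[idx]
```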
Step 205, synthesizing and denoising multiple groups of main images through a first thread to generate a target main image, acquiring a second target image from the target main image, meanwhile, calculating second depth information of the target image in a target scene according to a reference main image and a reference sub-image through the second thread, and meanwhile, acquiring a second background image from the reference main image through a third thread.
Step 206, synthesizing the second background image and the second target image according to the second depth-of-field information of the target image.
Specifically, as shown in fig. 8, in an embodiment of the present application, the multiple groups of main images are synthesized and noise-reduced by the first thread to generate a target main image, from which a second target image is acquired; meanwhile, second depth-of-field information of the target image in the target scene is calculated by the second thread from the reference main image and the reference sub-image (in fig. 8 these are the second frames captured by the main camera and the sub-camera), and a second background image is acquired from the reference main image by the third thread. The second background image and the second target image are then synthesized according to the second depth-of-field information of the target image. This not only improves image processing efficiency; because the second target image is taken from the synthesized, noise-reduced target main image, image definition is further improved.
For the convenience of clearly understanding the multi-frame synthesis noise reduction process, the following description will be made on the multi-frame synthesis noise reduction of the main image in a scene with poor light conditions.
When ambient light is insufficient, imaging devices such as terminal devices generally shoot by automatically raising the photosensitivity. However, raising the photosensitivity in this way introduces more noise into the image. Multi-frame synthesis noise reduction aims to reduce the noise points in the image and improve the quality of images shot under high photosensitivity. It relies on the prior knowledge that noise points are randomly distributed: after multiple groups of images are shot in succession, the noise point appearing at a given position may be a red, green, or white noise point, or there may be no noise point there at all, which provides a condition for comparison and screening. Noise points can therefore be screened out according to the values of the pixel points at the same position across the multiple captured images (the value of a pixel point reflects the number of pixels it contains; the more pixels it contains, the higher its value and the clearer the corresponding image).
Furthermore, after the noise points are screened out, color guessing and pixel replacement can be performed on them according to a further algorithm, removing the noise. Through such processing, a noise reduction effect with extremely low image-quality loss can be achieved.
For example, as a simpler and more convenient multi-frame synthesis noise reduction method, after a plurality of groups of shot images are obtained, values of pixel points corresponding to the same position in the plurality of groups of shot images are read, and a weighted average value is calculated for the pixel points to generate a value of the pixel point at the position in the synthesized image. In this way, a sharp image can be obtained.
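In the equal-weight case this weighted-average method reduces to a per-pixel mean over the frames, as in the sketch below (frame alignment is assumed to have been handled beforehand):

```python
import numpy as np

def multiframe_denoise(frames):
    # Per-pixel mean over aligned frames: random noise averages out,
    # stable scene content is preserved.
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.rint(stack.mean(axis=0)).astype(np.uint8)
```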
Of course, in this embodiment, in order to further improve the image processing effect, the blurring processing may also be performed on the second background image according to the second depth information of the target image, and the blurring processing manner is similar to the blurring processing manner described in the above embodiment for the first background image, and is not described herein again.
To sum up, the image processing method of the embodiment of the application shoots with uniform exposure parameters when the difference between the ambient brightness and the brightness of the shooting subject is not large, which not only reduces the processing pressure on the terminal device but also ensures the imaging effect of the image.
In order to achieve the above embodiments, the present application also proposes an image processing apparatus, and fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, as shown in fig. 9, the image processing apparatus including: a photographing module 100, an acquisition module 200 and a processing module 300.
The shooting module 100 is configured to control the main camera to shoot a first main image of the target scene according to the standard exposure parameters, and control the sub camera to shoot a first sub image of the target scene according to the standard exposure parameters.
The capture module 100 is further configured to control the main camera to capture a second main image of the target scene according to the overexposure parameter.
The obtaining module 200 is configured to calculate, by a first thread, first depth-of-field information of a target image in a target scene according to a first main image and a first sub-image, obtain, by a second thread, the first target image from the first main image, and obtain, by a third thread, a first background image from the second main image.
The processing module 300 is configured to perform synthesis processing on the first target image and the first background image according to the first depth-of-field information of the target image, so as to obtain a target scene image.
In one embodiment of the present application, as shown in fig. 10, the apparatus further comprises a detection module 400 and a determination module 500.
the detecting module 400 is configured to detect a first brightness of an environment where a subject is located in a target scene and a second brightness of the subject.
The determining module 500 is configured to determine that a difference between the second luminance and the first luminance is greater than or equal to a preset threshold.
Further, in one embodiment of the present application, as shown in fig. 11, the photographing module 100 includes a blurring unit 110.
The blurring unit 110 is configured to perform blurring processing on the first background image according to the first depth-of-field information of the target image, and obtain the target scene image with a blurred background.
It should be noted that the foregoing description of the method embodiments is also applicable to the apparatus in the embodiments of the present application, and the implementation principles thereof are similar and will not be described herein again.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
In summary, the image processing apparatus according to the embodiment of the present application controls the main camera to shoot a first main image of a target scene according to the standard exposure parameter, controls the sub camera to shoot a first sub image of the target scene according to the standard exposure parameter, and controls the main camera to shoot a second main image of the target scene according to the overexposure parameter, calculates first depth-of-field information of the target image in the target scene according to the first main image and the first sub image through the first thread, acquires the first target image from the first main image through the second thread, acquires the first background image from the second main image through the third thread, and then performs synthesis processing on the first target image and the first background image according to the first depth-of-field information of the target image. Therefore, the imaging effect of the whole image can be ensured, especially the imaging effect of the whole image when the difference between the ambient brightness and the brightness of the shooting subject is large, and the image processing efficiency is improved.
In order to implement the above embodiments, the present application further proposes a computer device. The computer device is any device including a memory for storing a computer program and a processor for running the computer program, such as a smartphone or a personal computer, and it further includes an image processing circuit. The image processing circuit may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 12 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 12, for convenience of explanation, only the aspects of the image processing technique related to the embodiment of the present application are shown.
As shown in fig. 12, the image processing circuit includes an ISP processor 1040 and control logic 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, and the ISP processor 1040 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014, wherein to implement the background blurring processing method of the present application, the imaging device 1010 includes two sets of cameras, wherein, with continued reference to fig. 12, the imaging device 1010 may capture images of a scene based on a primary camera and a secondary camera simultaneously. The image sensor 1014 may include an array of color filters (e.g., Bayer filters), and the image sensor 1014 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 1014 and provide a set of raw image data that may be processed by the ISP processor 1040, wherein the ISP processor 1040 may calculate depth information, etc., based on the raw image data acquired by the image sensor 1014 in the primary camera and the raw image data acquired by the image sensor 1014 in the secondary camera provided by the sensor 1020. The sensor 1020 may provide the raw image data to the ISP processor 1040 based on the sensor 1020 interface type. The sensor 1020 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1040 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1040 may also receive pixel data from image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image Memory 1030 may be part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 1030 for additional processing before being displayed. ISP processor 1040 receives processed data from image memory 1030 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1070 for viewing by a user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). Further, the output of ISP processor 1040 may also be sent to image memory 1030, and display 1070 may read image data from image memory 1030. In one embodiment, image memory 1030 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 1040 may be transmitted to the encoder/decoder 1060 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 1070. The encoder/decoder 1060 may be implemented by a CPU or GPU or coprocessor.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistical data may include image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 1012 shading correction, and the like. Control logic 1050 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 1010 and, in turn, control parameters based on the received statistical data. For example, the control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 1012 shading correction parameters.
The following steps are performed to implement the image processing method using the image processing technique of fig. 12:
controlling a main camera to shoot a first main image of a target scene according to standard exposure parameters, simultaneously controlling a secondary camera to shoot a first secondary image of the target scene, and controlling the main camera to shoot a second main image of the target scene according to overexposure parameters;
calculating first depth of field information of a target image in the target scene according to the first main image and the first auxiliary image through a first thread, simultaneously acquiring a first target image from the first main image through a second thread, and simultaneously acquiring a first background image from the second main image through a third thread;
and synthesizing the first target image and the first background image according to the first depth information of the target image.
To achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium in which instructions, when executed by a processor, enable execution of the image processing method as described in the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present application also includes alternate implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the present application pertains.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (8)

1. An image processing method, comprising:
detecting a first brightness of an environment where a shooting subject is located in a target scene and a second brightness of the shooting subject;
determining that a difference value between the second brightness and the first brightness is greater than or equal to a preset threshold value;
controlling a main camera to shoot a first main image of the target scene according to standard exposure parameters, and simultaneously controlling a secondary camera to shoot a first secondary image of the target scene according to the standard exposure parameters;
controlling the main camera to shoot a second main image of the target scene according to an overexposure parameter;
calculating first depth-of-field information of a target image in the target scene according to the first main image and the first secondary image through a first thread, simultaneously acquiring a first target image from the first main image through a second thread, and simultaneously acquiring a first background image from the second main image through a third thread;
and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
2. The method of claim 1, further comprising:
blurring the first background image according to the first depth-of-field information of the target image to obtain the target scene image with a blurred background.
3. The method of claim 1, wherein, after detecting the first brightness of the environment where the shooting subject is located and the second brightness of the shooting subject, the method further comprises:
if the difference value between the second brightness and the first brightness is smaller than the preset threshold value, determining an exposure parameter according to the difference value;
controlling the main camera to shoot a plurality of groups of main images of the target scene according to the exposure parameter, and simultaneously controlling the secondary camera to shoot a plurality of groups of secondary images of the target scene;
acquiring a reference main image from the plurality of groups of main images, and acquiring, from the plurality of groups of secondary images, a reference secondary image shot in the same group as the reference main image;
synthesizing and denoising the plurality of groups of main images through the first thread to generate a target main image and acquiring a second target image from the target main image, meanwhile calculating, through the second thread, second depth-of-field information of the target image in the target scene according to the reference main image and the reference secondary image, and meanwhile acquiring a second background image from the reference main image through the third thread;
and synthesizing the second background image and the second target image according to the second depth-of-field information of the target image.
4. The method of claim 3, further comprising:
blurring the second background image according to the second depth-of-field information of the target image to obtain the target scene image after background blurring processing.
5. An image processing apparatus characterized by comprising:
a detection module, configured to detect a first brightness of an environment where a shooting subject is located in a target scene and a second brightness of the shooting subject;
a determining module, configured to determine that a difference value between the second brightness and the first brightness is greater than or equal to a preset threshold value;
a shooting module, configured to control a main camera to shoot a first main image of the target scene according to standard exposure parameters and to control a secondary camera to shoot a first secondary image of the target scene according to the standard exposure parameters;
wherein the shooting module is further configured to control the main camera to shoot a second main image of the target scene according to an overexposure parameter;
an acquisition module, configured to calculate first depth-of-field information of a target image in the target scene according to the first main image and the first secondary image through a first thread, to acquire a first target image from the first main image through a second thread, and to acquire a first background image from the second main image through a third thread;
and a processing module, configured to synthesize the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
6. The apparatus of claim 5, wherein the shooting module comprises:
a blurring unit, configured to blur the first background image according to the first depth-of-field information of the target image to obtain the target scene image with a blurred background.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method as claimed in any one of claims 1 to 4 when executing the program.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 4.
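For orientation only, the following sketch outlines the branch between claims 1-2 (overexposure compositing when the shooting subject is much brighter than its environment) and claims 3-4 (multi-frame synthesis denoising otherwise). The threshold value, the cam and pipeline interfaces, and every helper named below are illustrative assumptions rather than the claimed implementation:

    def capture_scene(cam, pipeline, subject_luma, ambient_luma,
                      threshold=0.5):
        # cam and pipeline are hypothetical interfaces: cam exposes
        # shoot_pair/shoot_main/shoot_groups, and pipeline exposes the
        # fusion helpers used below.
        difference = subject_luma - ambient_luma

        if difference >= threshold:
            # Claims 1-2: standard-exposure stereo pair for depth, an
            # overexposed main shot for the background, then compositing
            # (optionally with background blurring).
            first_main, first_secondary = cam.shoot_pair(exposure="standard")
            second_main = cam.shoot_main(exposure="over")
            return pipeline.fuse_bright_subject(first_main, first_secondary,
                                                second_main)

        # Claims 3-4: derive an exposure parameter from the brightness
        # difference, shoot several groups of frames with both cameras,
        # synthesize and denoise the main frames, and composite against a
        # reference pair picked from the same group.
        exposure = pipeline.exposure_from_difference(difference)
        main_groups, secondary_groups = cam.shoot_groups(exposure)
        ref_main, ref_secondary = pipeline.pick_reference(main_groups,
                                                          secondary_groups)
        denoised_main = pipeline.synthesize_and_denoise(main_groups)
        return pipeline.fuse_low_contrast(denoised_main, ref_main,
                                          ref_secondary)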
CN201711277634.1A 2017-12-06 2017-12-06 Image processing method and device Active CN108156369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711277634.1A CN108156369B (en) 2017-12-06 2017-12-06 Image processing method and device

Publications (2)

Publication Number Publication Date
CN108156369A CN108156369A (en) 2018-06-12
CN108156369B (en) 2020-03-13

Family

ID=62466064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711277634.1A Active CN108156369B (en) 2017-12-06 2017-12-06 Image processing method and device

Country Status (1)

Country Link
CN (1) CN108156369B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881730A (en) * 2018-08-06 2018-11-23 成都西纬科技有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium
CN109523456B (en) * 2018-10-31 2023-04-07 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN109803087B (en) * 2018-12-17 2021-03-16 维沃移动通信有限公司 Image generation method and terminal equipment
CN109862269B (en) 2019-02-18 2020-07-31 Oppo广东移动通信有限公司 Image acquisition method and device, electronic equipment and computer readable storage medium
CN110191291B (en) * 2019-06-13 2021-06-25 Oppo广东移动通信有限公司 Image processing method and device based on multi-frame images
CN112606402A (en) * 2020-11-03 2021-04-06 泰州芯源半导体科技有限公司 Product manufacturing platform applying multi-parameter analysis
CN114520880B (en) * 2020-11-18 2023-04-18 华为技术有限公司 Exposure parameter adjusting method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610421A (en) * 2008-06-17 2009-12-23 深圳华为通信技术有限公司 Video communication method, Apparatus and system
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
JP2017143354A (en) * 2016-02-08 2017-08-17 キヤノン株式会社 Image processing apparatus and image processing method
CN107169939A (en) * 2017-05-31 2017-09-15 广东欧珀移动通信有限公司 Image processing method and related product
CN107241559A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Portrait photographic method, device and picture pick-up device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant
GR01 Patent grant