CN107426502B - Shooting method and device, electronic equipment and storage medium - Google Patents
Shooting method and device, electronic equipment and storage medium
- Publication number
- CN107426502B, CN201710846509.1A, CN201710846509A
- Authority
- CN
- China
- Prior art keywords
- photographer
- portrait
- camera
- background image
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The disclosure relates to a shooting method and device, an electronic device, and a storage medium. The shooting method comprises the following steps: when a shooting instruction is detected, controlling a first camera to capture an image containing a subject, and taking the image of the subject as a background image; determining, in the background image, a superimposition area for the portrait of the photographer captured in the viewfinder picture of a second camera; and when a synthesis instruction is detected, compositing the portrait of the photographer into the superimposition area of the background image to obtain an image containing both the photographer and the subject. With the embodiments of the disclosure, the photographer can appear in a group photo with the subject without moving to the subject's position, which improves shooting convenience and further improves the user experience.
Description
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a shooting method and apparatus, and an electronic device.
Background
The camera is one of the most common and important functions of terminals such as mobile phones, tablets, and e-readers.
At present, pictures are typically taken with a terminal as follows: one member of a group photographs the other members, so the photographer does not appear in the picture. Even if another member then takes a second picture, one member is always missing from each final photo. Alternatively, the group can ask a stranger for help to photograph all of its members together. Moreover, when the photographer wants a picture together with a subject who is not in the same place as the photographer, such a photo simply cannot be taken; for example, a fan usually cannot get a group photo with a celebrity.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide a shooting method and apparatus, and an electronic device, so as to solve the technical problems in the related art.
According to a first aspect of embodiments of the present disclosure, there is provided a photographing method, the method including:
when a shooting instruction is detected, controlling a first camera to capture an image containing a subject, and taking the image of the subject as a background image;
determining a superimposition area, in the background image, of a portrait of the photographer from the viewfinder picture of a second camera;
and when a synthesis instruction is detected, compositing the portrait of the photographer into the superimposition area of the background image to obtain an image containing the subject and the photographer.
Optionally, the method further comprises:
determining that the current shooting mode is a group photo mode;
and in response to the group photo mode, switching from the first camera to the second camera after a preset duration has elapsed since the shooting instruction was detected, and invoking a preset image processing algorithm to extract the portrait of the photographer from the viewfinder picture of the second camera.
Optionally, the step of determining an overlapping area of the portrait about the photographer in the background image in the viewfinder frame of the second camera includes:
opening a framing picture of the second camera;
determining a portrait of the photographer in a framing picture of the second camera;
closing a framing picture of the second camera and displaying the background image;
and determining the portrait of the photographer to be superposed in the background image and determining the superposed area of the portrait of the photographer in the background image.
Optionally, after the step of determining an overlapping area of the portrait about the photographer in the background image in the finder screen of the second camera, the method further includes:
acquiring parameter values of the background image in a preset range outside the superposition area as suggested parameter values;
and adjusting the portrait of the photographer according to the suggested parameter value and generating at least one suggested image containing the photographed person and the photographer.
Optionally, after the step of controlling the first camera to capture an image including the subject and taking the image of the subject as a background image when the capturing instruction is detected, the method further includes:
when the triggering operation in the region corresponding to the background image is detected, acquiring a current value of a first preset parameter corresponding to the background image and displaying the current value;
and receiving the adjusted target value of the first preset parameter, and updating the current value of the first preset parameter of the background image to the target value.
Optionally, after the step of determining an overlapping area of the portrait about the photographer in the background image in the finder screen of the second camera, the method further includes:
when the triggering operation is detected in the region corresponding to the portrait of the photographer, acquiring a current value of a second preset parameter corresponding to the portrait of the photographer;
and receiving the adjusted target value of the second preset parameter, and updating the current value of the second preset parameter of the portrait of the photographer to the target value.
Optionally, after the step of determining an overlapping area of the portrait about the photographer in the background image in the finder screen of the second camera, the method further includes:
and when the replacement area instruction is detected, re-determining the superposition area of the portrait in the background image based on the received designated position.
According to a second aspect of the embodiments of the present disclosure, there is provided a photographing apparatus including:
a background image shooting unit configured to control a first camera to shoot an image containing a subject and take the image of the subject as a background image when a shooting instruction is detected;
a superimposition area determination unit configured to determine a superimposition area of a person image about a photographer in the background image in the finder screen of the second camera;
a composite image acquisition unit configured to, when a composite instruction is detected, composite the portrait of the photographer determined by the superimposition area determination unit in a superimposition area of the background image, resulting in an image including the subject and the photographer.
Optionally, the apparatus comprises a mode determination unit; the mode determination unit includes:
a group photo mode determination module configured to determine that a current photographing mode is a group photo mode;
the mode response module is configured to, in response to the group photo mode, switch from the first camera to the second camera after a preset duration has elapsed since the shooting instruction was detected;
and the photographer portrait matting module is configured to invoke a preset image processing algorithm to matte the portrait of the photographer from the framing image of the second camera.
Optionally, the superimposition area determination unit includes:
a camera opening module configured to open a framing picture of the second camera;
a photographer portrait determination module configured to determine a portrait of the photographer in a finder screen of the second camera;
a background image display module configured to close a framing screen of the second camera and display the background image;
an overlap region determination module configured to determine that the portrait of the photographer is overlapped in the background image and determine an overlap region of the portrait of the photographer in the background image.
Optionally, the superimposition area determination unit further includes:
a suggested parameter value obtaining module configured to obtain a parameter value of a background image in a preset range outside the superimposition area as a suggested parameter value;
a suggested image generation module configured to adjust a portrait of the photographer according to the suggested parameter value and generate at least one suggested image including the subject and the photographer.
Optionally, the background image capturing unit includes:
the first acquisition module is configured to acquire a current value of a first preset parameter corresponding to the background image when a trigger operation is detected in the background image;
the first display module is configured to display the current value of the first preset parameter acquired by the first acquisition module;
a first updating module configured to receive the adjusted target value of the preset parameter and update the current value of the preset parameter of the background image to the target value.
Optionally, the superimposition area determination unit includes:
the second acquisition module is configured to acquire a current value of a second preset parameter corresponding to the portrait of the photographer when a trigger operation is detected in a portrait corresponding area of the photographer;
the second display module is configured to display the current value of the second preset parameter acquired by the second acquisition module;
and the second updating module is configured to receive the adjusted target value of the second preset parameter and update the current value of the second preset parameter of the portrait of the photographer to the target value.
Optionally, the apparatus further comprises a superimposition area re-determination unit; the superimposition area re-determination unit is configured to re-determine, when a replacement area instruction is detected, the superimposition area of the portrait in the background image based on the received designated position.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
monitoring whether a shooting instruction is received;
when a shooting instruction is detected, controlling a first camera to capture an image containing a subject, and taking the image of the subject as a background image;
determining a superimposition area, in the background image, of a portrait of the photographer from the viewfinder picture of a second camera;
and when a synthesis instruction is detected, compositing the portrait of the photographer into the superimposition area of the background image to obtain an image containing the subject and the photographer.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
According to the method provided by the embodiment of the disclosure, the image of the subject captured by the first camera is used as the background image; the portrait of the photographer is then obtained from the viewfinder picture of the second camera and superimposed on the background image; finally, according to a synthesis instruction, the portrait of the photographer is composited into the superimposition area of the background image, and an image containing both the photographer and the subject is obtained as the final image. The photographer can therefore appear in a group photo without moving to the subject's position, which improves shooting convenience and further improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a photographing method according to an exemplary embodiment.
Fig. 2(a) -2 (d) are schematic diagrams illustrating a shooting mode adjustment scene according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a photographing method according to another exemplary embodiment.
Fig. 4 to 8 are schematic views of application scenarios according to the photographing method shown in fig. 1.
Fig. 9 is a flowchart illustrating a photographing method according to still another exemplary embodiment.
FIG. 10 is a diagram illustrating an overlay area of a portrait with respect to a background image, according to an example embodiment.
Fig. 11 is a schematic diagram illustrating an image of a human figure superimposed with a background image according to an exemplary embodiment.
Fig. 12 is a flowchart illustrating a photographing method according to still another exemplary embodiment.
FIG. 13 is a diagram illustrating preset ranges in accordance with an exemplary embodiment.
Fig. 14 is a flowchart illustrating a photographing method according to still another exemplary embodiment.
Fig. 15 is a flowchart illustrating a photographing method according to still another exemplary embodiment.
Fig. 16 to 17 are schematic views illustrating a first preset parameter adjustment scene of a background image according to an exemplary embodiment.
Fig. 18 is a flowchart illustrating a photographing method according to still another exemplary embodiment.
Fig. 19 to 20 are second preset parameter adjustment scene diagrams of a portrait of a photographer shown according to an exemplary embodiment.
Fig. 21 is a block diagram illustrating a configuration of a photographing apparatus according to an exemplary embodiment.
Fig. 22 is a block diagram illustrating a configuration of a photographing apparatus according to another exemplary embodiment.
Fig. 23 is a block diagram illustrating a configuration of a photographing apparatus according to still another exemplary embodiment.
Fig. 24 is a block diagram illustrating a configuration of a photographing apparatus according to still another exemplary embodiment.
Fig. 25 is a block diagram illustrating a configuration of a photographing apparatus according to still another exemplary embodiment.
Fig. 26 is a block diagram illustrating a configuration of a photographing apparatus according to still another exemplary embodiment.
Fig. 27 is a block diagram illustrating a configuration of a photographing apparatus according to still another exemplary embodiment.
FIG. 28 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a photographing method according to an exemplary embodiment. The shooting method can be applied to electronic equipment (such as a smart phone, a tablet computer, a digital camera and the like) with a shooting function. As shown in fig. 1, the photographing method includes the following steps S101 to S103:
in step S101, when a shooting instruction is detected, a first camera is controlled to capture an image containing a subject, and the image of the subject is taken as a background image;
in step S102, a superimposition area, in the background image, of the portrait of the photographer from the viewfinder picture of the second camera is determined;
in step S103, when a synthesis instruction is detected, the portrait of the photographer is composited into the superimposition area of the background image to obtain an image including the subject and the photographer.
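For readers who prefer a concrete illustration, the three steps can be sketched in Python as below; the camera objects and the way the portrait has already been matted are hypothetical placeholders made for this sketch, not part of the disclosure, and the images are assumed to be Pillow Image objects.

```python
def capture_background(first_camera):
    # S101: hypothetical capture call; the returned subject image becomes the background
    return first_camera.capture().convert("RGBA")

def determine_overlap_region(portrait, top_left):
    # S102: the superimposition area is the rectangle the portrait will cover in the background
    x, y = top_left
    return (x, y, x + portrait.width, y + portrait.height)

def composite(background, portrait, overlap_region):
    # S103: paste the matted portrait; its alpha channel acts as the paste mask
    result = background.copy()
    result.paste(portrait, overlap_region[:2], mask=portrait)
    return result
```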
It should be noted that, in the embodiment of the present disclosure, the shooting instruction is generated according to a trigger operation performed by the photographer on the smartphone, for example a tap on the display screen, a press of a "shoot" button, or a press of the volume "+" (or "-") button. Those skilled in the art can configure this according to the specific scenario, and the present disclosure is not limited thereto.
It should be noted that the first camera and the second camera may be two cameras facing opposite directions; for example, the first camera is a rear camera and the second camera is a front camera. Alternatively, the first camera and the second camera may be the same physical camera: "first camera" then refers to that camera when it is pointed at the subject, and "second camera" refers to the same camera when it is pointed at the photographer, so the one camera is simply used in two different orientations. Of course, the first camera and the second camera may also be the same camera pointed in the same direction; in that case the photographer hands the smartphone to someone else and moves into the framed scene, and the purpose of a group photo can still be achieved. Those skilled in the art can choose a configuration according to the specific use scenario, and the present disclosure is not limited thereto.
It should be noted that superimposition here means projecting a second image (whose area is smaller than that of a first image) onto the first image, i.e., overlaying the second image on a certain region of the first image, called the superimposition area. The final image shown on the display screen after the superimposition contains the whole second image and part of the first image (the portion of the first image within the superimposition area is covered). In the embodiment of the present disclosure, the first image is the background image and the second image is the portrait of the photographer.
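As a minimal, self-contained sketch of this superimposition (file names and coordinates are made up for illustration), Pillow's paste with an alpha mask produces exactly the behaviour described: the whole second image is kept and the covered part of the first image disappears.

```python
from PIL import Image

background = Image.open("subjects.png").convert("RGBA")    # first image (larger)
portrait = Image.open("photographer.png").convert("RGBA")  # second image (smaller, transparent outside the person)

x, y = 120, 200                                            # top-left corner of the superimposition area
overlap_area = (x, y, x + portrait.width, y + portrait.height)

merged = background.copy()
merged.paste(portrait, (x, y), mask=portrait)              # only the superimposition area of the background is covered
merged.save("group_photo.png")
```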
In the embodiment of the present disclosure, the above shooting method is described by taking a smartphone as an example; the smartphone includes a front camera (top left corner in fig. 3) and a rear camera (not shown in the figure). The smartphone is provided with a controller, which detects events from and controls each of its components.
In an embodiment of the present disclosure, before using the photographing method shown in fig. 1, it is necessary to set the photographing mode of the smartphone to the group photo mode, and a setting process is shown in fig. 2(a), and includes:
first, the photographer triggers the operation "set" function box, and the controller displays a setting menu as shown in fig. 2(b) when detecting the above-mentioned trigger operation.
Then, the photographer triggers and operates a "shooting mode" function option in the above setting menu, and the controller detects the above triggering operation to display a shooting mode sub-menu, which is shown in fig. 2 (c).
Finally, the photographer triggers operation "group photo mode", and the controller displays the "confirm" and "cancel" function blocks shown in fig. 2(d) according to the above-mentioned triggering operation and continues to detect the trigger operation of the photographer. When the trigger operation of the 'confirm' function block by the photographer is detected, the shooting mode is set to 'group photo mode'.
As shown in fig. 3, when the smartphone enters the photographing mode, the controller first determines whether the current photographing mode is the group photo mode and, if so, responds to the group photo mode (corresponding to step S1011). The controller then checks whether a shooting instruction has been detected (corresponding to step S1012) and keeps waiting until one is detected; it then starts timing a preset duration (the timing may also be done by a timer) (corresponding to step S1013). When the preset duration elapses, the controller switches from the first camera to the second camera and invokes a preset image processing algorithm to extract the portrait of the photographer from the viewfinder picture of the second camera.
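A rough sketch of this control flow follows; the controller attributes, the camera calls, the matting routine, and the one-second duration are assumptions made for illustration, since the disclosure leaves them implementation-defined.

```python
import threading

PRESET_DURATION_S = 1.0  # assumed value; the disclosure only requires "a preset duration"

def on_shooting_instruction(controller):
    # S1011/S1012: only act in group photo mode, once a shooting instruction arrives
    if controller.mode != "group_photo":
        return
    controller.background = controller.first_camera.capture()        # hypothetical capture call

    def switch_and_matte():
        # S1013 onwards: after the preset duration, switch cameras and extract the portrait
        controller.active_camera = controller.second_camera
        frame = controller.second_camera.preview_frame()              # hypothetical viewfinder frame
        controller.portrait = controller.matting_algorithm(frame)     # e.g. an alpha-matting routine

    threading.Timer(PRESET_DURATION_S, switch_and_matte).start()
```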
In practical application, as shown in fig. 4, the smartphone may treat the rear camera as the first camera by default; when the photographer triggers the "camera" function box, the viewfinder picture of the first camera is displayed on the smartphone's display screen. Referring to fig. 5, the photographer frames three companions, the subjects A, B, and C. The first camera adjusts its focus and generates a focus frame (the dashed frame shown in fig. 5) when it detects that subject B has a smiling face or otherwise meets the shooting requirement. The photographer then triggers the "shooting" function box, either following the prompt of the focus frame on the display screen or based on shooting experience; the display screen generates a shooting instruction from this trigger operation and sends it to the controller. If the photographer triggers the "cancel" function box, the above process is repeated.
The controller communicates with the first camera in real time. When the shooting instruction is detected, the controller controls the first camera to capture an image containing the three subjects A, B, and C, as shown in fig. 6, and takes this image as the background image.
When the controller detects the shooting instruction, it also instructs a timer to start timing a preset duration. The preset duration can be set according to the time the first camera needs to operate and the time needed to store the background image in a cache or memory. When the preset duration elapses, the controller switches from the rear camera to the front camera, i.e., from the first camera to the second camera. It can be understood that, in the embodiment of the present disclosure, the photographer may also switch from the first camera to the second camera manually by triggering a "camera switching" function box on the display screen (for example, the camera key at the top of the display area in fig. 5).
As shown in fig. 7, the viewfinder picture of the front camera is now displayed on the display screen. The front camera likewise adjusts its focus and generates a focus frame (the dashed frame shown in fig. 7) when it detects that the photographer has a smiling face or otherwise meets the shooting requirement. The photographer triggers the "group photo" function box, either following the prompt of the focus frame on the display screen or based on shooting experience; the display screen generates a group photo instruction from this trigger operation and sends it to the controller. If the photographer triggers the "cancel" function box, the above process is repeated.
The controller controls the front camera to capture an image of the photographer according to the group photo instruction. The controller then invokes a preset image processing algorithm to extract the photographer's portrait (denoted D in fig. 7) from that image, adjusts the size of portrait D, composites it directly into the background image to obtain an image containing the subjects A, B, and C and the photographer D, as shown in fig. 8, and stores the final image.
It should be noted that, in the embodiment of the present disclosure, the preset image processing algorithm refers to an image matting algorithm pre-stored in the smartphone; for example, it may be a Real-Time Alpha Matting algorithm. Of course, those skilled in the art may select other image matting algorithms according to the specific scene, and the disclosure is not limited thereto.
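Purely as an illustration of what "matting out the portrait" yields, and not of the named algorithm itself, the sketch below applies a precomputed alpha matte to a viewfinder frame with Pillow; producing the matte is the job of the matting algorithm and is assumed here.

```python
from PIL import Image

def apply_matte(frame_path, matte_path):
    # frame: RGB viewfinder frame; matte: grayscale alpha (0 = background, 255 = photographer)
    frame = Image.open(frame_path).convert("RGB")
    alpha = Image.open(matte_path).convert("L")
    portrait = frame.convert("RGBA")
    portrait.putalpha(alpha)   # background pixels become fully transparent
    return portrait
```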
In an embodiment of the present disclosure, as shown in fig. 9, after switching to the front camera (corresponding to step S1021), the portrait of the photographer D is extracted in real time from the viewfinder picture of the front camera (corresponding to step S1023); the viewfinder picture of the front camera is then closed and the portrait of the photographer D is superimposed on the background image (corresponding to step S1024). In fig. 10, the lower-layer image is the background image, the upper-layer image is the portrait of the photographer, and the black portion of the background image is the projection of the photographer's portrait onto the background image, i.e., the superimposition area of the portrait in the background image. The superimposed background image and portrait are displayed on the display screen (as shown in fig. 11).
In the embodiment of the disclosure, closing the viewfinder picture of the front camera may simply mean turning the front camera off; the front camera then stops collecting images, and the controller only reads the background image from the cache or memory. Alternatively, the display priorities of the front camera's viewfinder picture and of the background image may be adjusted: the display reads and shows the background image from the cache or memory according to its higher display priority, while the front camera keeps collecting images and storing them in the cache or memory, but the display screen does not read or show them because of their lower display priority. Those skilled in the art may choose either approach according to the specific scenario, and the embodiment of the disclosure is not limited thereto.
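One way to picture the priority-based variant, under the assumption that each displayable layer carries a numeric priority, is the following small sketch:

```python
def frame_to_display(layers):
    # layers: {"viewfinder": (priority, image), "background": (priority, image), ...}
    # The background image is given the higher priority, so live viewfinder frames keep
    # being written to the cache but are never the layer the screen actually shows.
    name = max(layers, key=lambda k: layers[k][0])
    return layers[name][1]

# e.g. frame_to_display({"viewfinder": (0, live_frame), "background": (1, background_image)})
```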
The display screen then generates a group photo instruction according to the photographer's trigger operation and sends it to the controller. When this synthesis instruction is detected, the controller composites the portrait of the photographer into the superimposition area of the background image to obtain an image containing the subject and the photographer.
To produce several candidate group photos, after step S102 and as shown in fig. 12, the controller in an embodiment of the disclosure further acquires parameter values of the background image within a preset range outside the superimposition area as suggested parameter values (corresponding to step S1025). The width of the preset range may be set according to the specific scene. The controller then adjusts the corresponding parameter values of the photographer's portrait based on the suggested parameter values acquired in step S1025 and generates at least one suggested image containing the subject and the photographer (corresponding to step S1026).
As shown in fig. 13, the preset range is the shaded band along the edge of the superimposition area. After acquiring the suggested parameter values, the controller adjusts the corresponding parameter values of the photographer's portrait so that the portrait blends more harmoniously with the background image. It can be understood that the controller may also place the portrait at different positions of the background image and, following the adjustment process above, obtain at least one suggested image containing the subject and the photographer. The suggested images are temporarily stored in a cache and scrolled on the display screen at a preset frequency until the display screen generates a group photo instruction from the photographer's trigger operation and sends it to the controller. The controller then composites the selected portrait into the superimposition area of the background image according to the group photo instruction, obtaining an image containing the subject and the photographer, and deletes the remaining unselected suggested images.
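A hedged sketch of steps S1025–S1026 follows: it samples the mean brightness of the background in a band just outside the superimposition area, nudges the portrait toward it, and builds candidate images at several positions. The band width, the choice of brightness as the sampled parameter, and the candidate positions are all assumptions made for illustration.

```python
from PIL import ImageEnhance, ImageStat

def band_brightness(background, overlap, margin=20):
    # mean brightness of the band of width `margin` just outside the superimposition area
    x0, y0, x1, y1 = overlap
    outer = background.crop((max(x0 - margin, 0), max(y0 - margin, 0),
                             min(x1 + margin, background.width),
                             min(y1 + margin, background.height))).convert("L")
    inner = background.crop((x0, y0, x1, y1)).convert("L")
    o, i = ImageStat.Stat(outer), ImageStat.Stat(inner)
    return (o.sum[0] - i.sum[0]) / max(o.count[0] - i.count[0], 1)

def suggested_images(background, portrait, positions, target_brightness):
    # adjust the portrait's brightness toward the sampled value, then try several placements
    current = ImageStat.Stat(portrait.convert("L")).mean[0]   # rough; a real version would weight by alpha
    factor = target_brightness / max(current, 1.0)
    rgb = ImageEnhance.Brightness(portrait.convert("RGB")).enhance(factor)
    adjusted = rgb.convert("RGBA")
    adjusted.putalpha(portrait.getchannel("A"))               # keep the original transparency
    candidates = []
    for x, y in positions:
        candidate = background.copy()
        candidate.paste(adjusted, (x, y), mask=adjusted)
        candidates.append(candidate)
    return candidates
```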
In an embodiment of the present disclosure, the photographer may update the superimposition area of the photographer's portrait in the background image, as shown in fig. 14, as follows:
The photographer performs a trigger operation on the display screen to determine a designated position, which corresponds to a position in the background image. The display screen then generates a replacement area instruction based on the designated position and sends it to the controller (corresponding to step S1027). When the replacement area instruction is detected, the controller re-determines the superimposition area of the photographer's portrait in the background image based on the designated position (corresponding to step S1028).
In an embodiment of the present disclosure, the photographer may also adjust the background image for a specific scene. As shown in fig. 15, when the photographer performs a trigger operation in the area of the display screen corresponding to the background image, the display screen generates a corresponding trigger operation instruction and sends it to the controller. The controller acquires and displays the current values of the first preset parameters corresponding to the background image (corresponding to step S1029). The photographer selects the current value of one or more first preset parameters from those displayed and adjusts it to a target value, and the display screen sends the target value to the controller. The controller updates the current value of the corresponding preset parameter to the corresponding target value (corresponding to step S1030).
As shown in fig. 16, when the display screen detects that the photographer has triggered the area corresponding to the background image, it passes the trigger operation to the controller. The controller obtains the current values of the first preset parameters corresponding to the background image, and the display screen shows them. The first preset parameters may include one or more of position, size, brightness, color levels, hue, contrast, and saturation; those skilled in the art may choose among them according to the specific scene, and the disclosure is not limited thereto. For example, as shown in fig. 17, the photographer may adjust the position of the background image so that it moves left, right, up, or down to sit more harmoniously with the photographer's portrait. If the center position of the background image is (X=0, Y=0), modifying it to (X=-2, Y=0) moves the background image 2 units to the left; modifying it to (X=-2, Y=+2) moves the image 2 units to the left and then 2 units up. Those skilled in the art may set these values according to the specific scene. Other first preset parameters may likewise be adjusted by increasing or decreasing their values.
After the photographer modifies the current value of a first preset parameter to the target value, the controller updates the background image according to that target value and displays the updated background image on the screen again. Of course, the first preset parameters may be adjusted as many times as necessary until the photographer is satisfied.
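The adjustment loop for the first preset parameters can be pictured as the following sketch; the parameter names, their defaults, and the use of brightness and contrast enhancers are assumptions, since the disclosure only lists the kinds of parameters that may be exposed, and the background is assumed to be an RGB Pillow image.

```python
from PIL import ImageEnhance

# assumed parameter set and defaults
background_params = {"center": (0, 0), "brightness": 1.0, "contrast": 1.0}

def update_background(background, params, **targets):
    # apply the adjusted target values and re-render the background for display
    params.update(targets)
    out = ImageEnhance.Brightness(background).enhance(params["brightness"])
    out = ImageEnhance.Contrast(out).enhance(params["contrast"])
    return out, params["center"]   # the centre offset is applied when compositing

# e.g. moving the background two units left: update_background(bg, background_params, center=(-2, 0))
```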
In an embodiment of the present disclosure, the photographer can likewise adjust the area corresponding to the photographer's portrait for a specific scene. As shown in fig. 18, when the photographer performs a trigger operation in the area of the display screen corresponding to the portrait, the display screen generates a corresponding trigger operation instruction and sends it to the controller. The controller acquires and displays the current values of the second preset parameters corresponding to the photographer's portrait (corresponding to step S1031). The photographer selects the current value of one or more second preset parameters and adjusts it to a target value, and the display screen sends the target value to the controller. The controller updates the current value of the corresponding preset parameter to the corresponding target value (corresponding to step S1032).
As shown in fig. 19, when the display screen detects that the photographer has triggered the area corresponding to the portrait, it passes the trigger operation to the controller. The controller obtains the current values of the second preset parameters corresponding to the portrait, and the display screen shows them. The second preset parameters may include one or more of position, size, brightness, color levels, hue, contrast, and saturation; those skilled in the art may choose among them according to the specific scene, and the disclosure is not limited thereto. For example, as shown in fig. 20, the photographer may adjust the position of the portrait so that it moves left, right, up, or down to better match the background image. If the center position of the portrait is (X=0, Y=0), modifying it to (X=-2, Y=0) moves the portrait 2 units to the left; modifying it to (X=-2, Y=+2) moves it 2 units to the left and then 2 units up. Those skilled in the art may set these values according to the specific scene. Other second preset parameters may likewise be adjusted by increasing or decreasing their values.
After the photographer modifies the current value of a second preset parameter to the target value, the controller updates the portrait according to that target value and displays the updated portrait on the screen again. Of course, the second preset parameters may be adjusted as many times as necessary until the photographer is satisfied.
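Analogously, a sketch of re-rendering the portrait after its second preset parameters change; scale and saturation are chosen here only as example parameters, and the portrait is assumed to be an RGBA Pillow image whose alpha channel marks the photographer.

```python
from PIL import Image, ImageEnhance

def update_portrait(portrait, scale=1.0, saturation=1.0):
    # resize the RGBA portrait, adjust its colour saturation, and keep its transparency
    w, h = portrait.size
    resized = portrait.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)
    rgb = ImageEnhance.Color(resized.convert("RGB")).enhance(saturation)
    out = rgb.convert("RGBA")
    out.putalpha(resized.getchannel("A"))
    return out
```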
In practical application, different trigger operation modes can be used for modifying the second preset parameters of the portrait or for moving the portrait to a designated position. For example, to adjust the position of the portrait directly, the photographer may first trigger the area corresponding to the portrait and, once the portrait is selected, drag it to the designated position. As another example, the photographer may long-press the area corresponding to the portrait and, once the portrait is selected, tap a position in the background image (the designated position); the controller then moves the portrait to that position. The embodiment of the present disclosure merely illustrates ways of adjusting where the portrait is superimposed on the background image; the trigger operations for modifying the background image, the preset parameters of the portrait, and so on may likewise be adapted to the photographer's usage habits, and are not repeated here.
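The drag interaction can be reduced to two small geometric helpers; the coordinates are display pixels, and this is a sketch of the idea rather than the disclosed implementation.

```python
def hits_portrait(point, overlap):
    # True if a touch lands inside the portrait's current superimposition area
    x, y = point
    x0, y0, x1, y1 = overlap
    return x0 <= x <= x1 and y0 <= y <= y1

def drag_portrait(overlap, start, end):
    # translate the superimposition area by the drag vector (only called after hits_portrait)
    dx, dy = end[0] - start[0], end[1] - start[1]
    x0, y0, x1, y1 = overlap
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```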
In summary, in the method provided by the embodiment of the disclosure, the image of the subject captured by the first camera is used as the background image; the portrait of the photographer is then obtained from the viewfinder picture of the second camera and superimposed on the background image; finally, according to a synthesis instruction, the portrait of the photographer is composited into the superimposition area of the background image, and an image containing both the photographer and the subject is obtained as the final image. The photographer can therefore appear in a group photo without moving to the subject's position, which improves shooting convenience and further improves the user experience.
Based on the above shooting method, an embodiment of the present disclosure further provides a shooting apparatus, as shown in fig. 21, including:
a background image capturing unit 2101 configured to, upon detection of a capturing instruction, control a first camera to capture an image containing a subject and take the image of the subject as a background image;
a superimposition area determination unit 2102 configured to determine a superimposition area of a person image about a photographer in the background image in the finder screen of the second camera;
a synthetic image acquisition unit 2103 configured to, when a synthesis instruction is detected, synthesize the portrait of the photographer determined by the superimposition area determination unit 2102 in the superimposition area of the background image, resulting in an image including the subject and the photographer.
In an embodiment of the present disclosure, the above-mentioned shooting device further includes a mode determination unit 2200. As shown in fig. 22, the mode determination unit 2200 includes:
a group photo mode determination module 2201 configured to determine that the current shooting mode is the group photo mode;
a mode response module 2202 configured to, in response to the group photo mode, switch from the first camera to the second camera after a preset duration has elapsed since the shooting instruction was detected;
the photographer portrait matting module 2203 is configured to invoke a preset image processing algorithm to matte the portrait of the photographer from the viewfinder image of the second camera.
In an embodiment of the present disclosure, as shown in fig. 23, the overlap area determining unit further includes:
a camera opening module 2301 configured to open a framing screen of the second camera;
a photographer portrait determination module 2302 configured to determine a portrait of a photographer in a finder screen of the second camera;
a background image display module 2303 configured to close the finder screen of the second camera and display a background image;
an overlap region determination module 2304 configured to determine that the portrait of the photographer is overlapped in the background image and determine an overlap region of the portrait of the photographer in the background image.
In an embodiment of the present disclosure, as shown in fig. 24, the above-mentioned overlapping area determining unit further includes:
a suggested parameter value obtaining module 2401 configured to obtain a parameter value of a background image in a preset range outside the superimposition area as a suggested parameter value;
a suggested image generating module 2402 configured to adjust the portrait of the photographer according to the suggested parameter value and generate at least one suggested image including the photographer and the photographer.
In an embodiment of the present disclosure, as shown in fig. 25, the background image capturing unit includes:
a first obtaining module 2501, configured to, when a trigger operation is detected in the background image, obtain a current value of a first preset parameter corresponding to the background image;
a first display module 2502, configured to display the current value of the first preset parameter acquired by the first acquisition module;
a first updating module 2503, configured to receive the adjusted target value of the preset parameter, and update the current value of the preset parameter of the background image to the target value.
In an embodiment of the present disclosure, as shown in fig. 26, the overlap area determining unit 2102 further includes:
a second obtaining module 2601, configured to obtain a current value of a second preset parameter corresponding to the portrait of the photographer when a trigger operation is detected in a region corresponding to the portrait of the photographer;
a second display module 2602 configured to display the current value of the second preset parameter acquired by the second acquisition module;
a second updating module 2603 configured to receive the adjusted target value of the second preset parameter and update the current value of the second preset parameter of the portrait of the photographer to the target value.
In an embodiment of the present disclosure, as shown in fig. 27, the above-described photographing apparatus further includes an overlap area re-determination unit 2701. The superimposition-region re-determination unit 2701 described above is configured to re-determine the superimposition region of the portrait of the photographer in the background image based on the received designated position when the replacement region instruction is detected.
With regard to the apparatus in the above-described embodiment, the specific manner in which each unit or module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
FIG. 28 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 2800 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 28, electronic device 2800 may include one or more of the following components: processing component 2802, memory 2804, power component 2806, multimedia component 2808, audio component 2810, interface for input/output (I/O) 2812, sensor component 2814, and communications component 2816.
The processing component 2802 generally controls overall operation of the electronic device 2800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 2802 may include one or more processors 2820 to execute instructions. Wherein one of the processors is configured to: monitoring whether a shooting instruction is received; when a shooting instruction is detected, controlling a first camera to shoot an image containing a shot person, and taking the image of the shot person as a background image; determining an overlapping area of a portrait about a photographer in the background image in a framing picture of the second camera; and when a synthesis instruction is detected, synthesizing the portrait of the photographer in the superposition area of the background image to obtain an image containing the photographed person and the photographer.
Further, the processing component 2802 can include one or more modules that facilitate interaction between the processing component 2802 and other components. For example, the processing component 2802 can include a multimedia module to facilitate interaction between the multimedia component 2808 and the processing component 2802.
The memory 2804 is configured to store various types of data to support operation at the electronic device 2800. Examples of such data include instructions for any application or method operating on electronic device 2800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 2808 includes a screen that provides an output interface between the electronic device 2800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 2808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the back-facing camera may receive external multimedia data when the electronic device 2800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 2810 is configured to output and/or input audio signals. For example, the audio component 2810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 2800 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in memory 2804 or transmitted via communications component 2816. In some embodiments, the audio component 2810 also includes a speaker for outputting audio signals.
I/O interface 2812 provides an interface between processing component 2802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
In an example embodiment, the electronic device 2800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as memory 2804 comprising instructions, executable by processor 2820 of electronic device 2800 is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (12)
1. A photographing method, characterized in that the method comprises:
when the current shooting mode is determined to be the group photo mode and a shooting instruction is detected, controlling a first camera to capture an image containing a subject, and taking the image of the subject as a background image;
switching from the first camera to a second camera after a preset duration has elapsed since the shooting instruction was detected, and invoking a preset image processing algorithm to extract a portrait of the photographer from the viewfinder picture of the second camera;
determining an overlapping area of a portrait about a photographer in the background image in a framing picture of the second camera;
when a synthesis instruction is detected, synthesizing the portrait of the photographer in an overlapping area of the background image to obtain an image containing the photographed person and the photographer;
the step of determining the overlapping area of the portrait about the photographer in the background image in the framing picture of the second camera comprises the following steps:
opening a framing picture of the second camera;
determining a portrait of the photographer in a framing picture of the second camera;
closing a framing picture of the second camera and displaying the background image;
and determining the portrait of the photographer to be superposed in the background image and determining the superposed area of the portrait of the photographer in the background image.
2. The photographing method according to claim 1, wherein after the step of determining an overlapping area of the portrait on the photographer in the background image in the finder screen of the second camera, the method further comprises:
acquiring parameter values of the background image in a preset range outside the superposition area as suggested parameter values;
and adjusting the portrait of the photographer according to the suggested parameter value and generating at least one suggested image containing the photographed person and the photographer.
3. The photographing method according to claim 1, wherein after the step of controlling the first camera to photograph the image including the subject and having the image of the subject as a background image when the photographing instruction is detected, the method further comprises:
when the triggering operation in the region corresponding to the background image is detected, acquiring a current value of a first preset parameter corresponding to the background image and displaying the current value;
and receiving the adjusted target value of the preset parameter, and updating the current value of the preset parameter of the background image to the target value.
4. The photographing method according to claim 1, wherein after the step of determining an overlapping area of the portrait on the photographer in the background image in the finder screen of the second camera, the method further comprises:
when the triggering operation is detected in the region corresponding to the portrait of the photographer, acquiring a current value of a second preset parameter corresponding to the portrait of the photographer;
and receiving the adjusted target value of the second preset parameter, and updating the current value of the second preset parameter of the portrait of the photographer to the target value.
5. The photographing method according to claim 1, wherein after the step of determining an overlapping area of the portrait on the photographer in the background image in the finder screen of the second camera, the method further comprises:
and when the replacement area instruction is detected, re-determining the superposition area of the portrait in the background image based on the received designated position.
6. A camera, characterized in that the camera comprises:
a group photo mode determination module configured to determine that a current photographing mode is a group photo mode;
a background image shooting unit configured to control a first camera to shoot an image containing a subject and take the image of the subject as a background image when a shooting instruction is detected;
the mode response module is configured to, in response to the group photo mode, switch from the first camera to the second camera after a preset duration has elapsed since the shooting instruction was detected;
the photographer portrait matting module is configured to invoke a preset image processing algorithm to matte the portrait of the photographer from the viewfinder image of the second camera;
a superimposition area determination unit configured to determine a superimposition area of a person image about a photographer in the background image in the finder screen of the second camera;
a synthetic image acquisition unit configured to, when a synthetic instruction is detected, synthesize the portrait of the photographer determined by the superimposition area determination unit in a superimposition area of the background image, resulting in an image including the subject and the photographer;
the superimposition area determination unit includes:
a camera opening module configured to open a framing picture of the second camera;
a photographer portrait determination module configured to determine a portrait of the photographer in a finder screen of the second camera;
a background image display module configured to close a framing screen of the second camera and display the background image;
an overlap region determination module configured to determine that the portrait of the photographer is overlapped in the background image and determine an overlap region of the portrait of the photographer in the background image.
7. The photographing apparatus according to claim 6, wherein the superimposition area determination unit further includes:
a suggested parameter value obtaining module configured to obtain a parameter value of a background image in a preset range outside the superimposition area as a suggested parameter value;
a suggested image generation module configured to adjust a portrait of the photographer according to the suggested parameter value and generate at least one suggested image including the subject and the photographer.
8. The photographing apparatus according to claim 6, wherein the background image photographing unit includes:
a first acquisition module configured to acquire a current value of a first preset parameter corresponding to the background image when a trigger operation is detected in the background image;
a first display module configured to display the current value of the first preset parameter acquired by the first acquisition module;
a first updating module configured to receive an adjusted target value of the first preset parameter and update the current value of the first preset parameter of the background image to the target value.
9. The photographing apparatus according to claim 6, wherein the superimposition area determination unit includes:
a second acquisition module configured to acquire a current value of a second preset parameter corresponding to the portrait of the photographer when a trigger operation is detected in the area corresponding to the portrait of the photographer;
a second display module configured to display the current value of the second preset parameter acquired by the second acquisition module;
a second updating module configured to receive an adjusted target value of the second preset parameter and update the current value of the second preset parameter of the portrait of the photographer to the target value.
10. The photographing apparatus according to claim 6, characterized in that the apparatus further includes a superimposition area re-determination unit; the superimposition area re-determination unit is configured to, when an area replacement instruction is detected, re-determine the superimposition area of the portrait of the photographer in the background image based on the received designated position.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing a computer program executed by the processor;
wherein the processor is configured to:
monitoring whether a shooting instruction is received;
when the current shooting mode is determined to be the group photo mode and a shooting instruction is detected, controlling a first camera to shoot an image containing a person to be photographed, and taking the image of the person to be photographed as a background image;
switching from the first camera to the second camera after a preset duration has elapsed since the shooting instruction was detected, and invoking a preset image processing algorithm to matte the portrait of the photographer from the framing picture of the second camera;
determining an overlapping area in the background image for the portrait of the photographer in the framing picture of the second camera;
when a synthesis instruction is detected, synthesizing the portrait of the photographer in the overlapping area of the background image to obtain an image containing the person to be photographed and the photographer;
wherein the step of determining the overlapping area of the portrait of the photographer in the background image in the framing picture of the second camera comprises:
opening a framing picture of the second camera;
determining a portrait of the photographer in a framing picture of the second camera;
closing a framing picture of the second camera and displaying the background image;
and superimposing the portrait of the photographer on the background image and determining the overlapping area of the portrait of the photographer in the background image.
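As a rough end-to-end sketch of the processor steps in claim 11 (again illustrative only): matte the photographer from the second camera's frame, then composite the result into the overlapping area of the background image. The trivial chroma-key below merely stands in for the "preset image processing algorithm", and the sketch assumes the second camera's frame fits inside the background at the chosen position.

```python
import numpy as np

def matte_portrait(frame):
    """Stand-in for the 'preset image processing algorithm': a trivial chroma-key
    against a near-uniform backdrop. Real portrait segmentation would replace this."""
    backdrop = np.median(frame.reshape(-1, 3), axis=0)             # dominant backdrop colour
    dist = np.linalg.norm(frame.astype(np.float32) - backdrop, axis=-1)
    return (dist > 40).astype(np.float32)                          # 1 where the pixel differs from the backdrop

def group_photo(background, photographer_frame, overlap_top_left):
    """Matte the photographer from the second camera's frame and composite the
    frame (weighted by its matte) into the overlapping area of the background."""
    alpha = matte_portrait(photographer_frame)[..., None]
    r, c = overlap_top_left
    h, w = photographer_frame.shape[:2]
    out = background.astype(np.float32).copy()
    region = out[r:r + h, c:c + w]              # overlapping area, assumed inside the image
    region[:] = alpha * photographer_frame + (1.0 - alpha) * region
    return out.astype(np.uint8)
```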
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710846509.1A CN107426502B (en) | 2017-09-19 | 2017-09-19 | Shooting method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107426502A CN107426502A (en) | 2017-12-01 |
CN107426502B (en) | 2020-03-17
Family
ID=60432128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710846509.1A Active CN107426502B (en) | Shooting method and device, electronic equipment and storage medium | 2017-09-19 | 2017-09-19
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107426502B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107872623B (en) * | 2017-12-22 | 2019-11-26 | 维沃移动通信有限公司 | A kind of image pickup method, mobile terminal and computer readable storage medium |
CN110351495A (en) * | 2018-04-08 | 2019-10-18 | 中兴通讯股份有限公司 | A kind of method, apparatus, equipment and the storage medium of mobile terminal shooting group photo |
WO2020029306A1 (en) * | 2018-08-10 | 2020-02-13 | 华为技术有限公司 | Image capture method and electronic device |
CN111246078A (en) * | 2018-11-29 | 2020-06-05 | 北京小米移动软件有限公司 | Image processing method and device |
JP6559870B1 (en) * | 2018-11-30 | 2019-08-14 | 株式会社ドワンゴ | Movie synthesis apparatus, movie synthesis method, and movie synthesis program |
CN110377259B (en) * | 2019-07-19 | 2023-07-07 | 深圳前海达闼云端智能科技有限公司 | Equipment hiding method, electronic equipment and storage medium |
CN114727000A (en) * | 2021-01-05 | 2022-07-08 | 北京小米移动软件有限公司 | Group photo method, device, terminal equipment and storage medium |
CN112887609B (en) * | 2021-01-27 | 2023-04-07 | 维沃移动通信有限公司 | Shooting method and device, electronic equipment and storage medium |
CN112954221A (en) * | 2021-03-11 | 2021-06-11 | 深圳市几何数字技术服务有限公司 | Method for real-time photo shooting |
CN113810755B (en) * | 2021-09-15 | 2023-09-05 | 北京百度网讯科技有限公司 | Panoramic video preview method and device, electronic equipment and storage medium |
CN117896538A (en) * | 2024-01-09 | 2024-04-16 | 广州开得联软件技术有限公司 | Method and device for patrolling course, equipment and storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101651767A (en) * | 2008-08-14 | 2010-02-17 | 三星电子株式会社 | Device and method for synchronously synthesizing images |
CN102055834A (en) * | 2009-10-30 | 2011-05-11 | Tcl集团股份有限公司 | Double-camera photographing method of mobile terminal |
CN104954689A (en) * | 2015-06-30 | 2015-09-30 | 努比亚技术有限公司 | Method and shooting device for acquiring photo through double cameras |
CN105578028A (en) * | 2015-07-28 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and terminal |
CN105391949A (en) * | 2015-10-29 | 2016-03-09 | 深圳市金立通信设备有限公司 | Photographing method and terminal |
CN105957045A (en) * | 2016-04-29 | 2016-09-21 | 珠海市魅族科技有限公司 | Picture synthesis method and device |
CN106101525A (en) * | 2016-05-31 | 2016-11-09 | 北京奇虎科技有限公司 | Application call dual camera carries out the method and device shot |
Also Published As
Publication number | Publication date |
---|---|
CN107426502A (en) | 2017-12-01 |
Similar Documents
Publication | Title
---|---
CN107426502B (en) | Shooting method and device, electronic equipment and storage medium
EP3010226B1 (en) | Method and apparatus for obtaining photograph
CN106572299B (en) | Camera opening method and device
EP3249509A1 (en) | Method and device for playing live videos
CN105282441B (en) | Photographing method and device
CN105631804B (en) | Image processing method and device
EP3945494A1 (en) | Video processing method, apparatus and storage medium
CN111586296B (en) | Image capturing method, image capturing apparatus, and storage medium
EP3945490B1 (en) | Method and device for processing video, and storage medium
CN107426489A (en) | Processing method, device and terminal during shooting image
CN113364965A (en) | Shooting method and device based on multiple cameras and electronic equipment
CN110995993B (en) | Star track video shooting method, star track video shooting device and storage medium
CN115134505B (en) | Preview picture generation method and device, electronic equipment and storage medium
CN114339022A (en) | Camera shooting parameter determining method and neural network model training method
CN111355879B (en) | Image acquisition method and device containing special effect pattern and electronic equipment
CN114079724B (en) | Taking-off snapshot method, device and storage medium
CN113315903B (en) | Image acquisition method and device, electronic equipment and storage medium
CN116939351A (en) | Shooting method, shooting device, electronic equipment and readable storage medium
CN111835977B (en) | Image sensor, image generation method and device, electronic device, and storage medium
CN114697517A (en) | Video processing method and device, terminal equipment and storage medium
CN114697515A (en) | Method and device for collecting image and readable storage medium
CN114943791A (en) | Animation playing method, device, equipment and storage medium
CN114189622A (en) | Image shooting method and device, electronic equipment and storage medium
CN112346606A (en) | Picture processing method and device and storage medium
CN108206910B (en) | Image shooting method and device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant