Nothing Special   »   [go: up one dir, main page]

CN111757007B - Image shooting method, device, terminal and storage medium - Google Patents

Image shooting method, device, terminal and storage medium Download PDF

Info

Publication number
CN111757007B
CN111757007B CN202010659336.4A CN202010659336A CN111757007B CN 111757007 B CN111757007 B CN 111757007B CN 202010659336 A CN202010659336 A CN 202010659336A CN 111757007 B CN111757007 B CN 111757007B
Authority
CN
China
Prior art keywords
shooting
terminal
instruction
auxiliary equipment
voice control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010659336.4A
Other languages
Chinese (zh)
Other versions
CN111757007A (en
Inventor
张童飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010659336.4A priority Critical patent/CN111757007B/en
Publication of CN111757007A publication Critical patent/CN111757007A/en
Priority to PCT/CN2021/095541 priority patent/WO2022007518A1/en
Application granted granted Critical
Publication of CN111757007B publication Critical patent/CN111757007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image shooting method, an image shooting device, a terminal and a storage medium, and belongs to the technical field of terminals. The method comprises the following steps: receiving a voice control instruction in a shooting scene; identifying the voice control instruction, and determining a control object indicated by the voice control instruction, wherein the control object is a terminal or a shooting auxiliary device; responding to the control object as the shooting auxiliary equipment, and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment; and sending an operation instruction to the shooting auxiliary equipment through the communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction. The embodiment of the application realizes the voice control of the shooting auxiliary equipment, and the voice interaction component does not need to be arranged in the shooting auxiliary equipment, so that the realization cost of the voice control shooting auxiliary equipment is reduced.

Description

Image shooting method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to an image shooting method, an image shooting device, a terminal and a storage medium.
Background
The shooting function is one of the functions with higher utilization rate in the terminal, and is convenient for a user to shoot a picture or record a video by using the terminal at any time and any place.
In general, when a user uses a terminal to shoot, the user needs to hold the terminal by hand, and the terminal is controlled to shoot by clicking a physical key or a virtual key. In order to realize shooting at some special angles, the user can also use the shooting auxiliary equipment to assist the terminal in shooting. For example, the user can fix the terminal on from rapping bar (a kind of shooting auxiliary assembly) to control the terminal and establish the connection with the communication subassembly (such as bluetooth) from rapping bar, in the shooting process, the user can trigger the communication subassembly from rapping bar and send the shooting instruction to the terminal through clicking the physics button on the selfie bar, thereby realizes that the image shoots.
Disclosure of Invention
The embodiment of the application provides an image shooting method, an image shooting device, a terminal and a storage medium. The technical scheme is as follows:
on one hand, the embodiment of the application provides an image shooting method, which is used for a terminal with a shooting function, wherein a communication connection is established between the terminal and a shooting auxiliary device, and the shooting auxiliary device is used for assisting the terminal in shooting;
the method comprises the following steps:
receiving a voice control instruction in a shooting scene;
identifying the voice control instruction, and determining a control object indicated by the voice control instruction, wherein the control object is the terminal or the shooting auxiliary equipment;
responding to the control object as the shooting auxiliary equipment, and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment;
and sending the operation instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
On the other hand, the embodiment of the application provides an image shooting device, which is used for a terminal with a shooting function, wherein a communication connection is established between the terminal and a shooting auxiliary device, and the shooting auxiliary device is used for assisting the terminal in shooting;
the device comprises:
the voice command receiving module is used for receiving a voice control command in a shooting scene;
the object determining module is used for identifying the voice control instruction and determining a control object indicated by the voice control instruction, wherein the control object is the terminal or the shooting auxiliary equipment;
the operation instruction generating module is used for responding to the control object as the shooting auxiliary equipment and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment;
and the operation instruction sending module is used for sending the operation instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, and the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
In another aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image capture method of the above aspect.
On the other hand, the embodiment of the application provides a shooting control system, which comprises a terminal and shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for assisting the terminal in shooting;
a communication connection is established between the terminal and the shooting auxiliary equipment;
the terminal comprises a terminal as described in the above aspect.
In another aspect, the present application provides a computer-readable storage medium storing at least one instruction for execution by a processor to implement the image capturing method according to the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the image capturing method provided by the above aspect.
The technical scheme provided by the embodiment of the application at least comprises the following beneficial effects:
by adopting the method provided by the embodiment of the application, when the auxiliary terminal of the shooting auxiliary equipment is used for shooting images, a user can send a voice control instruction to the terminal in a voice control mode, the terminal determines that the voice control instruction is used for controlling the terminal or the shooting auxiliary equipment, and when the voice control instruction is determined to be used for controlling the shooting auxiliary equipment, the generated operation instruction is further sent to the shooting auxiliary equipment, and the shooting control equipment executes corresponding operation, so that the voice control of the shooting control equipment is realized; moreover, because the recognition of the voice control instruction and the generation of the operation instruction are realized by the terminal, the shooting auxiliary equipment only needs to respond to the operation instruction, so that the shooting auxiliary equipment does not need to be provided with an additional voice interaction component, and the realization cost of the voice control shooting auxiliary equipment is reduced.
Drawings
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a method flow diagram of an image capture method provided by an exemplary embodiment of the present application;
FIG. 3 illustrates a method flow diagram of an image capture method provided by another exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an implementation of an image capture process provided by an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating an implementation of an image capture process according to another exemplary embodiment;
FIG. 6 illustrates a method flow diagram of an image capture method provided by another exemplary embodiment of the present application;
FIG. 7 is a schematic diagram illustrating an implementation of an image capture process according to another exemplary embodiment;
fig. 8 is a block diagram showing a configuration of an image capturing apparatus according to an embodiment of the present application;
fig. 9 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment of the present application;
fig. 10 shows a system architecture diagram of a photographing control system provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown, where the implementation environment includes a terminal 110 and a shooting assistance device 120.
The terminal 110 is an electronic device with a shooting function, and may be a smart phone, a tablet computer, a digital player, and so on, and the terminal 110 is illustrated as a smart phone in fig. 1 as an example. Optionally, the terminal 110 is provided with a front camera assembly and a rear camera assembly, wherein the front camera assembly is located on one side of a terminal screen (generally used for self-shooting), and the rear camera assembly is located on one side of a terminal back cover.
In addition, the terminal 110 in the embodiment of the present application further has voice acquisition and voice recognition functions. Through the voice collection function, the terminal 110 may receive a voice control instruction of a user, and through the voice recognition function, the terminal 110 may recognize a control intention expressed by the voice control instruction. Optionally, the voice recognition function is implemented locally by the terminal 110, or implemented by the terminal 110 by using a server, which is not limited in this embodiment.
The shooting assistance device 120 is a device for assisting the terminal to shoot, and may be a selfie stick, a cradle head, a terminal support, and the like, and the shooting assistance device 120 is illustrated as a cradle head in fig. 1 as an example. When the terminal 110 is assisted by the shooting assistance device 120 to shoot, the terminal 110 and the shooting assistance device can be fixedly connected through a fixing component (such as a clamp, a suction cup, and the like) on the shooting assistance device 120.
In the embodiment of the present application, the auxiliary photographing device 120 further has a mechanical structure by which the auxiliary photographing device 120 can change the position or posture of the terminal 110, thereby changing the photographing effect of the terminal 110. For example, the photographing assistant device 120 is provided with a rotating mechanical structure by which the photographing assistant device can change a left and right photographing angle or a photographing pitch angle of the terminal; alternatively, the photographing assistant device 120 is provided with a guide mechanical structure by which the photographing assistant device can change a relative distance between the terminal and the photographing object. The embodiment of the present application does not limit the specific mechanical structure of the shooting assistance apparatus.
In order to achieve the mutual communication between the terminal 110 and the auxiliary shooting device 120, the terminal 110 and the auxiliary shooting device 120 establish a communication connection, which may be a wired communication connection or a wireless communication connection.
The wired communication connection may be a Universal Serial Bus (USB) connection; the Wireless Communication connection may be a bluetooth connection, a Wireless Fidelity (WiFi) connection, or an NFC (Near Field Communication) connection, which is not limited in this embodiment.
In the shooting scene, the terminal 110 and the shooting auxiliary device 120 establish a communication connection therebetween, and start a voice control function. When receiving a voice control command from a user, the terminal 110 determines an object (terminal or shooting assistance device) to be controlled by the voice control command through a voice recognition function. When the voice control instruction is used to control the auxiliary shooting device 120, the terminal 110 generates an operation instruction that can be recognized by the auxiliary shooting device 120 according to the voice control instruction, and sends the operation instruction to the auxiliary shooting device 120 through the communication connection, and the auxiliary shooting device 120 responds to the operation instruction to perform corresponding operations.
For example, when the voice control instruction is used to instruct the shooting assistance device 120 to adjust the shooting angle of the terminal 110 through the rotating mechanical structure, the terminal 110 generates a rotation operation instruction according to the voice control instruction and sends the rotation operation instruction to the shooting assistance device 120, and the shooting assistance device 120 controls the rotating mechanical structure to operate according to the rotation direction and the rotation angle indicated by the rotation operation instruction, so as to achieve the effect of changing the shooting angle of the terminal.
Referring to fig. 2, a flowchart of a method of an image capturing method according to an exemplary embodiment of the present application is shown, where the method is described as being applied to the terminal 110 shown in fig. 1, and the method may include the following steps.
Step 201, receiving a voice control instruction in a shooting scene.
In a possible implementation, in the shooting scenario, when the voice assistant function is awakened, the terminal receives the voice control instruction through the microphone. Wherein, the voice assistant function can be awakened by a preset awakening instruction. Optionally, the subsequent control process of the shooting assistance device is implemented by the voice assistant function.
And 202, identifying the voice control command, and determining a control object indicated by the voice control command, wherein the control object is a terminal or a shooting auxiliary device.
In the embodiment of the application, in a shooting scene, a user can initiate voice control on any object in a terminal or a shooting auxiliary device, and therefore after receiving a voice control instruction, the terminal needs to determine a control object indicated by the voice control instruction.
In some embodiments, the terminal converts the voice control instruction into a text instruction, and recognizes the text instruction, thereby determining the control object according to the recognition result. The terminal may recognize the word instruction through a Natural Language Processing (NLP) model to obtain a recognition result.
In other embodiments, the terminal may upload the converted text instruction to the server, and the server identifies the text instruction and feeds back the identification result to the terminal, which is not limited in this embodiment.
It should be noted that, when the terminal is connected to a plurality of shooting auxiliary devices that implement different auxiliary functions, the terminal may determine, according to the recognition result, a specific shooting auxiliary device to be controlled by the voice control instruction from the plurality of shooting auxiliary devices, which is not limited in this embodiment.
And step 203, responding to the control object as the shooting auxiliary equipment, and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment.
In a possible implementation manner, when the voice control instruction is used for controlling the shooting auxiliary device, since the shooting auxiliary device cannot directly recognize and respond to the voice control instruction, in order to realize the control of the shooting auxiliary device, the terminal needs to generate an operation instruction which conforms to an instruction format supported by the shooting auxiliary device based on the voice control instruction.
Optionally, the operation instruction includes a function controlled by the voice control instruction and an adjustment amount corresponding to the function. For example, when the function controlled by the voice control command is a shooting angle adjustment function, the operation command includes an adjustment direction and an adjustment angle of the shooting angle.
And step 204, sending an operation instruction to the shooting auxiliary equipment through the communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
Further, the terminal sends the operation instruction to the shooting auxiliary equipment through the communication connection, so that the shooting auxiliary equipment executes corresponding operation according to the operation instruction.
When the voice control instruction is used to control the terminal, the terminal adjusts the shooting parameters of the terminal according to the shooting parameters indicated by the voice control instruction and the parameter adjustment amount corresponding to the shooting parameters. For example, when the voice control instruction indicates that the adjusted shooting parameter is an aperture, the terminal determines an aperture adjustment amount according to the voice control instruction, so that the aperture parameter is adjusted on the basis of the current aperture size.
Besides, the terminal controls the shooting auxiliary device according to the voice control instruction, the shooting auxiliary device can also control the terminal (through communication connection), for example, a user can click a physical key on the shooting auxiliary device to control the terminal to perform shooting operation.
Compared with the prior art, the terminal can only send a simple shooting instruction to the terminal by the shooting auxiliary equipment, so that rapid shooting is realized without directly controlling the terminal, in the embodiment of the application, the terminal can recognize the control requirement of a user on the shooting auxiliary equipment by means of voice and intention recognition functions, and then send a corresponding operation instruction to the shooting auxiliary equipment, so that voice control on the shooting auxiliary equipment is realized, and the efficiency and convenience for controlling the shooting auxiliary equipment when the shooting auxiliary equipment auxiliary terminal is used for shooting are improved; moreover, voice recognition is performed by the terminal, an additional voice recognition component is not required to be arranged in the shooting auxiliary equipment, and the cost of realizing voice control by the shooting auxiliary equipment is reduced.
To sum up, in the embodiment of the present application, when the auxiliary terminal of the shooting auxiliary device is used for shooting an image, a user may send a voice control instruction to the terminal in a voice control manner, the terminal determines that the voice control instruction is used for controlling the terminal or the shooting auxiliary device, and when the voice control instruction is determined to be used for controlling the shooting auxiliary device, the generated operation instruction is further sent to the shooting auxiliary device, and the shooting control device executes a corresponding operation, so that voice control over the shooting control device is implemented; moreover, because the recognition of the voice control instruction and the generation of the operation instruction are realized by the terminal, the shooting auxiliary equipment only needs to respond to the operation instruction, so that the shooting auxiliary equipment does not need to be provided with an additional voice interaction component, and the realization cost of the voice control shooting auxiliary equipment is reduced.
In general, when a user adjusts through the voice control terminal and the shooting auxiliary device, the adjustment scale is not explicitly indicated in the voice control instruction, for example, when the shooting angle of the terminal needs to be adjusted through the shooting auxiliary device, the voice control instruction sent by the user is "the lens is slightly shifted to the left", rather than "the lens is shifted to the left by 15 °. In order to make the operation performed by the shooting auxiliary equipment according to the operation instruction meet the user's desire, the terminal needs to determine the adjustment amount of the target function in addition to the target function to be controlled based on the voice control instruction, and then indicates the target function through the operation instruction. The following description will be made using exemplary embodiments.
Referring to fig. 3, a flowchart of a method of an image capturing method according to another exemplary embodiment of the present application is shown, where the method is described as being applied to the terminal 110 shown in fig. 1, and the method may include the following steps.
Step 301, receiving a voice control instruction in a shooting scene.
The step 201 may be referred to in the implementation manner of this step, and this embodiment is not described herein again.
And step 302, identifying the voice control command and determining a target function controlled by the voice control command.
In a possible implementation manner, after the terminal converts the voice control instruction into a text instruction, the text instruction is input into a pre-trained NLP model, so as to obtain probabilities corresponding to candidate functions output by the NLP model, and thus the candidate function corresponding to the highest probability is determined as a target function controlled by the voice control instruction, where the candidate functions include functions supported by the terminal in a shooting scene (such as a contrast adjusting function, a flash opening and closing function, an aperture size adjusting function, and the like) and functions supported by the shooting auxiliary device (such as adjusting left and right shooting angles, adjusting a shooting pitch angle, and adjusting a front and back position); moreover, the NLP model is obtained by training according to the sample word instruction containing the functional label, and the training process of the NLP model is not limited in the embodiment of the application.
Of course, the terminal may use other neural network models for identification besides using the NLP model, and this embodiment of the present application is not limited to this.
In an illustrative example, the terminal recognizes the voice control command 'angle is adjusted a little to the left' through the NLP model, and determines that the target function controlled by the voice control command is 'left and right shooting angle adjustment function'.
Further, the terminal compares the determined target object with a first function corresponding to the terminal and a second function corresponding to the shooting auxiliary equipment, so that a control object of the voice control instruction is determined.
And 303, responding to the target function belonging to the first function corresponding to the terminal, and determining that the control object is the terminal.
In one possible embodiment, the terminal stores a first function (set) corresponding to the terminal, and when the target function belongs to the first function, the control object is determined to be the terminal.
In one illustrative example, the first functionality associated with the terminal includes: and when the identification result indicates that the target function to be controlled is the 'focal length adjustment function', the terminal determines that the control object is the terminal.
And step 304, in response to that the target function belongs to a second function corresponding to the shooting auxiliary equipment, determining that the control object is the shooting auxiliary equipment.
In one possible embodiment, the terminal stores a second function (set) corresponding to the shooting assistance device, and when the target function belongs to the second function, the control object is determined to be the shooting assistance device.
Optionally, the second function is that after the terminal establishes a communication connection with the shooting assistance device, the shooting assistance device sends the shooting assistance device to the terminal through the communication connection, and stores the shooting assistance device in the terminal.
In an illustrative example, the corresponding second function of the shooting assistance apparatus includes: the terminal determines that the control object is the shooting auxiliary equipment when the recognition result indicates that the target function of the control is the left-right shooting angle adjusting function.
And step 305, responding to the control object being the shooting auxiliary equipment, and determining the target function controlled by the voice control instruction.
And when the control object is the shooting auxiliary equipment, the terminal further generates an operation instruction according to the target function controlled by the voice control instruction and the adjustment amount of the target function. Wherein, the target function can be obtained in the identification process.
Step 306, in response to the first adjustment amount corresponding to the target function included in the voice control instruction, generating an operation instruction according to the target function and the first adjustment amount.
In a possible implementation manner, the terminal detects whether the voice control command includes an explicit adjustment amount (i.e. a first adjustment amount) of the target function, and if so, the terminal generates an operation command according to the target function and the first adjustment amount. In some embodiments, the operation instruction includes a function identifier of the target function and the first adjustment amount.
Optionally, the terminal determines an adjustment amount keyword corresponding to the adjustment amount of the target function according to the determined target function, so as to determine whether the voice control instruction includes the first adjustment amount according to the adjustment amount keyword. For example, when the target function is a "left-right shooting angle adjustment function", the adjustment amount keyword may be degree.
In an illustrative example, as shown in fig. 4, the voice control command 42 received by the terminal 41 is "turn the camera to the left by 15 degrees", by recognizing the voice control command 42, the terminal 41 determines that the control target of the voice control command 42 is the photographing assisting device 43, determines that the target function controlled by the voice control command 42 is the "left and right photographing angle adjusting function", and extracts that the first adjustment amount of the target function is 15 degrees, thereby generating an operation command and transmitting the operation command to the photographing assisting device 43. After receiving the operation command, the shooting assistance device 43 performs an operation according to the operation command, and rotates the terminal 41 by 15 degrees to the left by rotating the mechanical structure.
And 307, performing image recognition on the framing picture in response to the fact that the voice control instruction does not contain the first adjustment amount corresponding to the target function, and obtaining an image recognition result.
In most cases, the user does not explicitly specify the amount of adjustment in the voice control command, but rather employs words such as "turn left slightly," "move forward slightly," and so forth. Since the shooting auxiliary device needs to perform corresponding operations according to the clear adjustment amount, when the voice control instruction does not include the first adjustment amount, the terminal needs to determine the clear adjustment amount.
In general, when a user views a viewfinder picture of the terminal and the display effect of a shooting object in the viewfinder picture does not meet the user's expectation, the user issues a voice control instruction, so that in one possible embodiment, the terminal determines the adjustment amount of the target function based on the image recognition result of the viewfinder picture and the target function controlled by the voice control instruction.
In some embodiments, the terminal identifies the photographic subject in the view through the image identification model, and further extracts parameters (including position, size, angle and the like) of the photographic subject in the view, so as to obtain an image identification result containing the photographic subject and the parameters of the subject.
And 308, determining a second adjustment amount corresponding to the target function according to the image recognition result.
Further, based on the obtained image recognition result, the terminal determines an adjustment amount that meets the user's control intention. In one possible embodiment, this step may include the following steps.
And 308A, determining original object parameters of the shot object in the view frame according to the image recognition result, and determining target object parameters of the shot object in the view frame according to the target function, wherein the object parameters comprise at least one of size parameters, position parameters or angle parameters.
Optionally, in this embodiment of the application, an image recognition result obtained by performing image recognition on the view frame includes an original object parameter of the photographic object, where the original object parameter is used to indicate at least one of a size, a position, and an angle of the photographic object in the current view frame. For example, the original object parameters include 100px × 200px (size parameter), (200px, 560px) (position parameter), and 10 ° (angle parameter).
Accordingly, the photographing assistant apparatus can change at least one of the size, position, and angle of the photographic subject in the through-view screen by performing the operation.
In order to achieve the shooting effect meeting the user's expectation, in one possible implementation, the terminal performs image recognition on the shot images in the album in advance, so as to construct the corresponding relationship among the shooting scene, the shooting object and the object parameter according to the image recognition result. Schematically, the correspondence is shown in table one.
Watch 1
Shooting scene Shooting object Object parameters
Multi-person combined racket Portrait of human A size parameter A1, a position parameter B1, and an angle parameter C1
Single-person landscape self-timer Portrait of human A size parameter A2, a position parameter B2, and an angle parameter C2
Correspondingly, when the adjustment amount of the target function is determined, the terminal searches the object parameters from the corresponding relation according to the current shooting object and the current shooting scene, and then determines the parameters corresponding to the target function in the searched object parameters as the target object parameters.
For example, the terminal finds out the object parameters including the size parameter a1, the position parameter B1 and the angle parameter C1 according to the current shooting scene "multi-person taking" and the shooting object "portrait", and when the target function controlled by the voice control command is the "left-right shooting angle adjustment function", the terminal determines the angle parameter C1 as the target object parameter.
Of course, in addition to performing image recognition on the local captured image to construct the above correspondence, the terminal may also obtain the correspondence from the server (obtained by analyzing a large number of images by the server), which is not limited in this embodiment.
And 308B, determining a second adjustment amount according to the target object parameter and the original object parameter, wherein after the shooting auxiliary equipment adjusts the target function according to the second adjustment amount, the object parameter of the target object in the view finding picture is the target object parameter.
Further, the terminal determines a second adjustment amount required to be adjusted when the object parameter of the shooting object in the framing picture is converted into the target object parameter according to the difference value between the target object parameter and the original object parameter, so as to generate an operation instruction based on the second adjustment amount.
In a possible implementation manner, the terminal is provided with a mapping relationship between the parameter difference and the adjustment amount, and when determining the second adjustment amount of the target function, the terminal determines the parameter difference between the target object parameter and the original object parameter, so as to determine the second adjustment amount corresponding to the target function according to the mapping relationship between the parameter difference and the adjustment amount, where the adjustment amount and the parameter difference have a positive correlation.
In some embodiments, the mapping between different parameters and adjustment amounts is different. For example, the mapping relationship between the angle parameter and the adjustment amount can be expressed as: and a is bx + c, wherein a is an adjustment amount, b and c are preset coefficients, and x is an angle parameter.
Step 309, generating an operation instruction according to the target function and the second adjustment amount.
Similar to the process of generating the operation instruction in step 306, the terminal generates the operation instruction according to the target function and the second adjustment amount.
In an exemplary example, as shown in fig. 5, the terminal 51 receives a voice control command 52 "turn the camera to the left a little bit", and by recognizing the voice control command 52, the terminal 51 determines that the control target of the voice control command 52 is the photographing assisting device 53, and determines that the target function controlled by the voice control command 52 is the "left and right photographing angle adjusting function".
Since the control command 52 does not include an explicit angle adjustment amount in the voice, the terminal 51 performs image recognition on the current viewfinder frame 511, determines an original object parameter of the photographic subject according to the image recognition result, determines a second adjustment amount (for example, 15 degrees) of the angle parameter according to the target object parameter and the original object parameter of the photographic subject, and generates an operation command according to the second adjustment amount and the "left and right photographic angle adjustment function". After receiving the operation instruction, the shooting assistance device 53 rotates the terminal 51 by 15 degrees to the left through the rotating mechanism.
And 310, sending an operation instruction to the shooting auxiliary equipment through the communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
The step 204 may be referred to in the implementation manner of this step, and this embodiment is not described herein again.
In this embodiment, the terminal identifies the voice control instruction, determines the target function controlled by the voice control instruction, and further determines the control object according to the target function, so that the accuracy of the determined control object is improved.
In addition, in the embodiment, when the voice control instruction contains a clear adjustment amount, the terminal generates an operation instruction according to the adjustment amount and the target function; when the voice control instruction does not contain a clear adjustment amount, the terminal identifies the view-finding picture through images, so that the adjustment amount of the target function is determined based on the image identification result, an operation instruction is generated, and the intelligent degree of voice control is improved.
Since the adjustment range of the photographing auxiliary device is limited (for example, only 180-degree rotation can be achieved), in the process of generating the operation instruction based on the adjustment amount, the terminal needs to determine whether the photographing auxiliary device can complete the adjustment of the target function based on the determined adjustment amount. In some embodiments, based on fig. 3, as shown in fig. 6, step 305 further includes step 311, step 306 may be replaced by step 3061, and step 309 may be replaced by step 3091.
And 311, determining an upper limit of an adjustment amount corresponding to the target function in the current state according to the state information of the shooting auxiliary equipment.
In a possible implementation manner, after the target function controlled by the voice control instruction is determined, the terminal acquires the state information from the shooting auxiliary device through communication connection, so that the upper limit of the adjustment amount of the target function in the current state is determined according to the state information.
Optionally, the state information includes current function parameters of each function supported by the shooting assistance device. For example, the status information includes a current pan angle, a current pan tilt angle, a current fore-and-aft position, and the like of the shooting assistance apparatus.
Further, the terminal determines the upper limit of the adjustment amount of the target function according to the upper limit of the function parameter of the target function and the current function parameter. Wherein, the upper limit of the adjustment amount is the upper limit of the function parameter-the current function parameter.
In an illustrative example, when the target function is a left-right photographing angle adjusting function, the terminal determines that the upper limit of the adjustment amount is 60 degrees according to the upper limit of the function parameter of the left-right photographing angle adjusting function of 180 degrees and the current function parameter of 120 degrees.
Step 3061, in response to the first adjustment amount corresponding to the target function included in the voice control instruction and the first adjustment amount corresponding to the target function being less than or equal to the upper limit of the adjustment amount, generating an operation instruction according to the target function and the first adjustment amount.
When the voice control instruction comprises a first adjustment amount corresponding to the target function, the terminal detects whether the first adjustment amount is smaller than or equal to the upper limit of the adjustment amount, and if the first adjustment amount is smaller than or equal to the upper limit of the adjustment amount, a corresponding operation instruction is generated.
Optionally, if the first adjustment amount is greater than the adjustment amount upper limit, the terminal performs an adjustment prompt in a predetermined manner. The predetermined mode includes but is not limited to a text mode, a voice mode and an animation mode. For example, the terminal prompts the user to manually adjust the shooting auxiliary device in a text manner.
Step 3091, in response to that the second adjustment amount corresponding to the target function is less than or equal to the upper limit of the adjustment amount, generating an operation instruction according to the target function and the second adjustment amount.
And when determining a second adjustment amount corresponding to the target function, the terminal detects whether the second adjustment amount is less than or equal to the upper limit of the adjustment amount, and if the second adjustment amount is less than or equal to the upper limit of the adjustment amount, a corresponding operation instruction is generated.
Optionally, if the second adjustment amount is greater than the upper limit of the adjustment amount, the terminal performs an adjustment prompt in a predetermined manner. The predetermined mode includes but is not limited to a text mode, a voice mode and an animation mode. For example, the terminal prompts the user to manually adjust the shooting auxiliary device in a text manner.
When the voice control instruction does not contain a clear adjustment amount, the terminal can instruct the shooting auxiliary equipment to gradually adjust according to a certain adjustment amount besides determining the adjustment amount by adopting the image recognition mode, and after the shooting auxiliary equipment adjusts the terminal to a state expected by a user, the user can control the shooting auxiliary equipment to stop adjusting through voice. In a possible implementation manner, after the step 305, in response to that the voice control instruction does not include the first adjustment amount corresponding to the target function, the terminal generates an operation instruction according to the target function and an adjustment rate corresponding to the target function, where the adjustment rate is used to represent the adjustment amount corresponding to the target function in the unit time length.
The adjusting rate can be set by the terminal, or can be set by the shooting auxiliary equipment and sent to the terminal. For example, when the target function is a "left-right photographing angle adjusting function", the adjustment rate may be 5 degrees/second, and when the target function is a "front-rear position adjusting function", the adjustment rate may be 1 cm/second.
And the terminal generates an operation instruction according to the target function and the adjustment rate, and after the operation instruction is sent to the shooting auxiliary equipment, the shooting auxiliary equipment circularly executes corresponding operation according to the adjustment rate in the operation instruction.
After the above step 310, in response to receiving the voice control stop instruction, the terminal sends an operation stop instruction to the shooting assistance apparatus through a communication connection with the shooting assistance apparatus. Correspondingly, after the shooting auxiliary equipment receives the operation stop instruction, the shooting auxiliary equipment stops executing corresponding operation.
In a possible implementation manner, after the terminal sends the operation instruction to the shooting auxiliary equipment, whether a voice control stop instruction is received or not is detected, and when the voice control stop instruction is received, the operation stop instruction is sent to the shooting auxiliary equipment. The voice control stop instruction may be an instruction including a preset stop keyword.
Illustratively, as shown in fig. 7, the voice control command 72 received by the terminal 71 is "turn the camera a little to the left", and by recognizing the voice control command 72, the terminal 71 determines that the control target of the voice control command 72 is the shooting assistance apparatus 73, and determines that the target function controlled by the voice control command 72 is "left-right shooting angle adjustment function".
Since the control command 72 does not include an explicit angle adjustment amount in the voice, the terminal 71 generates an operation command according to the target function and the adjustment rate (for example, 5 degrees/second), and transmits the operation command to the shooting assistance apparatus 73. Upon receiving the operation instruction, the shooting assistance apparatus 73 adjusts the shooting angle to the left in accordance with the adjustment rate.
When receiving the voice control end instruction 74 "OK", the terminal 71 sends an operation stop instruction to the photographing auxiliary apparatus 73, and the photographing auxiliary apparatus 73 stops adjusting the photographing angle to the left upon receiving the stop instruction.
In the embodiment, after receiving the voice control instruction, the terminal controls the shooting auxiliary equipment to perform corresponding operation circularly according to the adjustment rate, and when receiving the voice control ending instruction, the terminal controls the shooting auxiliary equipment to stop the adjustment operation, so that a user does not need to explicitly indicate the adjustment amount, and the intelligence degree and the accuracy of the voice control shooting auxiliary equipment are improved.
Referring to fig. 8, a block diagram of an image capturing apparatus according to an embodiment of the present disclosure is shown. The apparatus may be implemented as all or a portion of the terminal in software, hardware, or a combination of both. The device includes:
a voice instruction receiving module 801, configured to receive a voice control instruction in a shooting scene;
an object determining module 802, configured to identify the voice control instruction, and determine a control object indicated by the voice control instruction, where the control object is a terminal or a shooting auxiliary device;
an operation instruction generating module 803, configured to generate an operation instruction according to the voice control instruction in response to that the control object is the shooting assistance device, where the operation instruction conforms to an instruction format of the shooting assistance device;
an operation instruction sending module 804, configured to send the operation instruction to the shooting auxiliary device through a communication connection with the shooting auxiliary device, where the shooting auxiliary device is configured to execute a corresponding operation according to the operation instruction.
Optionally, the operation instruction generating module 803 includes:
the function determining unit is used for determining a target function controlled by the voice control instruction;
and the first generating unit is used for responding to a first adjustment amount corresponding to the target function contained in the voice control instruction and generating the operation instruction according to the target function and the first adjustment amount.
Optionally, the operation instruction generating module 803 further includes:
the image recognition unit is used for responding to the voice control instruction without the first adjustment amount corresponding to the target function, and performing image recognition on a framing picture to obtain an image recognition result;
the adjustment quantity determining unit is used for determining a second adjustment quantity corresponding to the target function according to the image recognition result;
and the second generating unit is used for generating the operation instruction according to the target function and the second adjustment amount.
Optionally, the adjustment amount determining unit is configured to:
determining original object parameters of a shot object in the view-finding picture according to the image recognition result, and determining target object parameters of the shot object in the view-finding picture according to the target function, wherein the object parameters comprise at least one of size parameters, position parameters or angle parameters;
and determining the second adjustment amount according to the target object parameter and the original object parameter, wherein after the shooting auxiliary equipment adjusts the target function according to the second adjustment amount, the object parameter of the target object in the framing picture is the target object parameter.
Optionally, the adjustment amount determining unit is configured to:
determining a parameter difference between the target object parameter and the original object parameter;
and determining the second adjustment amount corresponding to the target function according to the mapping relation between the parameter difference and the adjustment amount.
Optionally, the apparatus further comprises:
the upper limit determining module is used for determining the upper limit of the adjustment amount corresponding to the target function in the current state according to the state information of the shooting auxiliary equipment;
the first generating unit or the second generating unit is used for responding to the adjustment amount corresponding to the target function being smaller than or equal to the adjustment amount upper limit, and executing the step of generating the operation instruction according to the target function and the adjustment amount;
and the prompt module is used for responding to the fact that the adjustment amount corresponding to the target function is larger than the upper limit of the adjustment amount and carrying out adjustment prompt in a preset mode.
Optionally, the operation instruction generating module 803 further includes:
a third generating unit, configured to generate, in response to that the voice control instruction does not include the first adjustment amount corresponding to the target function, the operation instruction according to the target function and an adjustment rate corresponding to the target function, where the adjustment rate is used to represent an adjustment amount corresponding to the target function in a unit time length, and the shooting assistance device is configured to cyclically execute corresponding operations according to the adjustment rate in the operation instruction;
the device further comprises:
and the stop instruction sending module is used for responding to the received voice control stop instruction and sending an operation stop instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, and the shooting auxiliary equipment is used for stopping executing corresponding operation according to the operation stop instruction.
Optionally, the object determining module 802 is configured to:
identifying the voice control instruction, and determining a target function controlled by the voice control instruction;
responding to the target function belonging to a first function corresponding to the terminal, and determining that the control object is the terminal;
and determining the control object as the shooting auxiliary equipment in response to the target function belonging to a second function corresponding to the shooting auxiliary equipment.
Referring to fig. 9, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 910 and a memory 920.
Processor 910 may include one or more processing cores. The processor 910 connects various parts within the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 920 and calling data stored in the memory 920. Alternatively, the processor 910 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 910 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Neural-Network Processing Unit (NPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing contents required to be displayed by the touch display screen; the NPU is used for realizing an Artificial Intelligence (AI) function; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 910, but may be implemented by a single chip.
The Memory 920 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 920 includes a non-transitory computer-readable medium. The memory 920 may be used to store instructions, programs, code sets, or instruction sets. The memory 920 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like; the storage data area may store data (such as audio data, a phonebook) created according to the use of the terminal, and the like.
The terminal in the embodiment of the present application further includes a camera module 930 and a communication module 940. Wherein. The camera assembly 930 may be a front camera or a rear camera of the terminal for image acquisition; the communication component 940 may be a bluetooth component, a WiFi component, an NFC component, or the like, for data communication with an external device (such as a shooting assistance device).
In addition, those skilled in the art will appreciate that the configurations of the terminals illustrated in the above-described figures do not constitute limitations on the terminals, as the terminals may include more or less components than those illustrated, or some components may be combined, or a different arrangement of components may be used. For example, the terminal further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a speaker, a microphone, a power supply, and other components, which are not described herein again.
Referring to fig. 10, a system architecture diagram of a photographing control system according to an exemplary embodiment of the present application is shown. The shooting control system comprises a terminal 1010 and a shooting auxiliary device 1020, wherein the shooting auxiliary device 1020 is used for assisting the terminal 1010 in shooting. A communication connection is established between the terminal 1010 and the photographing assistant 1020, and the terminal 1010 includes the terminal as described in the above embodiments.
The embodiment of the present application further provides a computer-readable storage medium, which stores at least one instruction, where the at least one instruction is used for being executed by a processor to implement the image capturing method according to the embodiment.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the image capturing method provided by the above-described embodiment.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. An image shooting method is characterized in that the method is used for a terminal with a shooting function, a communication connection is established between the terminal and a shooting auxiliary device, and the shooting auxiliary device is used for assisting the terminal in shooting;
the method comprises the following steps:
receiving a voice control instruction in a shooting scene;
identifying the voice control instruction, and determining a control object indicated by the voice control instruction, wherein the control object is the terminal or the shooting auxiliary equipment, and the control object is determined based on a target function controlled by the voice control instruction;
responding to the control object as the shooting auxiliary equipment, and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment;
and sending the operation instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
2. The method of claim 1, wherein generating operational instructions according to the voice control instructions comprises:
determining the target function controlled by the voice control instruction;
and responding to a first adjustment amount corresponding to the target function contained in the voice control instruction, and generating the operation instruction according to the target function and the first adjustment amount.
3. The method of claim 2, wherein after determining the target function controlled by the voice control instruction, the method comprises:
performing image recognition on a framing picture in response to the voice control instruction not containing the first adjustment amount corresponding to the target function to obtain an image recognition result;
determining a second adjustment amount corresponding to the target function according to the image recognition result;
and generating the operation instruction according to the target function and the second adjustment amount.
4. The method of claim 3, wherein the determining a second adjustment amount corresponding to the target function according to the image recognition result comprises:
determining original object parameters of a shot object in the view-finding picture according to the image recognition result, and determining target object parameters of the shot object in the view-finding picture according to the target function, wherein the object parameters comprise at least one of size parameters, position parameters or angle parameters;
and determining the second adjustment amount according to the target object parameter and the original object parameter, wherein after the shooting auxiliary equipment adjusts the target function according to the second adjustment amount, the object parameter of the target object in the framing picture is the target object parameter.
5. The method of claim 4, wherein determining the second adjustment amount based on the target object parameter and the original object parameter comprises:
determining a parameter difference between the target object parameter and the original object parameter;
and determining the second adjustment amount corresponding to the target function according to the mapping relation between the parameter difference and the adjustment amount.
6. The method of any of claims 2 to 5, wherein after determining the target function controlled by the voice control command, the method further comprises:
determining an upper limit of an adjustment amount corresponding to the target function in the current state according to the state information of the shooting auxiliary equipment;
responding to the adjustment amount corresponding to the target function being smaller than or equal to the adjustment amount upper limit, executing the step of generating the operation instruction according to the target function and the adjustment amount;
and responding to the fact that the adjustment amount corresponding to the target function is larger than the adjustment amount upper limit, and performing adjustment prompt in a preset mode.
7. The method of claim 2, wherein after determining the target function controlled by the voice control instruction, the method further comprises:
responding to the voice control instruction without the first adjustment amount corresponding to the target function, generating the operation instruction according to the target function and the adjustment rate corresponding to the target function, wherein the adjustment rate is used for representing the adjustment amount corresponding to the target function in unit time length, and the shooting auxiliary equipment is used for circularly executing corresponding operation according to the adjustment rate in the operation instruction;
after the operation instruction is sent to the shooting auxiliary equipment through the communication connection with the shooting auxiliary equipment, the method further comprises the following steps:
and responding to the received voice control stop instruction, and sending an operation stop instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for stopping executing corresponding operation according to the operation stop instruction.
8. The method according to any one of claims 1 to 5, wherein the recognizing the voice control command and determining the control object indicated by the voice control command comprises:
recognizing the voice control instruction, and determining the target function controlled by the voice control instruction;
responding to the target function belonging to a first function corresponding to the terminal, and determining that the control object is the terminal;
and determining the control object as the shooting auxiliary equipment in response to the target function belonging to a second function corresponding to the shooting auxiliary equipment.
9. An image shooting device is characterized in that the device is used for a terminal with a shooting function, a communication connection is established between the terminal and a shooting auxiliary device, and the shooting auxiliary device is used for assisting the terminal in shooting;
the device comprises:
the voice command receiving module is used for receiving a voice control command in a shooting scene;
the object determination module is used for identifying the voice control instruction and determining a control object indicated by the voice control instruction, wherein the control object is the terminal or the shooting auxiliary equipment and is determined based on a target function controlled by the voice control instruction;
the operation instruction generating module is used for responding to the control object as the shooting auxiliary equipment and generating an operation instruction according to the voice control instruction, wherein the operation instruction conforms to the instruction format of the shooting auxiliary equipment;
and the operation instruction sending module is used for sending the operation instruction to the shooting auxiliary equipment through communication connection with the shooting auxiliary equipment, and the shooting auxiliary equipment is used for executing corresponding operation according to the operation instruction.
10. A terminal, characterized in that the terminal comprises a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image capture method of any of claims 1 to 8.
11. The shooting control system is characterized by comprising a terminal and shooting auxiliary equipment, wherein the shooting auxiliary equipment is used for assisting the terminal in shooting;
a communication connection is established between the terminal and the shooting auxiliary equipment;
the terminal comprises a terminal according to claim 10.
12. A computer-readable storage medium having stored thereon at least one instruction for execution by a processor to perform the image capture method of any of claims 1 to 8.
CN202010659336.4A 2020-07-09 2020-07-09 Image shooting method, device, terminal and storage medium Active CN111757007B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010659336.4A CN111757007B (en) 2020-07-09 2020-07-09 Image shooting method, device, terminal and storage medium
PCT/CN2021/095541 WO2022007518A1 (en) 2020-07-09 2021-05-24 Image photographing method and apparatus, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010659336.4A CN111757007B (en) 2020-07-09 2020-07-09 Image shooting method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111757007A CN111757007A (en) 2020-10-09
CN111757007B true CN111757007B (en) 2022-02-08

Family

ID=72711696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659336.4A Active CN111757007B (en) 2020-07-09 2020-07-09 Image shooting method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN111757007B (en)
WO (1) WO2022007518A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111757007B (en) * 2020-07-09 2022-02-08 深圳市欢太科技有限公司 Image shooting method, device, terminal and storage medium
CN114173061B (en) * 2021-12-13 2023-09-29 深圳万兴软件有限公司 Multi-mode camera shooting control method and device, computer equipment and storage medium
CN116389694A (en) * 2023-06-05 2023-07-04 河北思恒电子科技有限公司 Video monitoring method and video monitoring robot based on artificial intelligence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970483A (en) * 2012-11-26 2013-03-13 广东欧珀移动通信有限公司 Voice control method and device of camera head
CN105844877A (en) * 2016-03-16 2016-08-10 广东欧珀移动通信有限公司 Method, apparatus and system for controlling selfie stick with mobile terminal, and mobile terminal
CN208459748U (en) * 2018-07-13 2019-02-01 北京京东尚科信息技术有限公司 A kind of film studio
CN110602391A (en) * 2019-08-30 2019-12-20 Oppo广东移动通信有限公司 Photographing control method and device, storage medium and electronic equipment
CN110868542A (en) * 2019-11-22 2020-03-06 深圳传音控股股份有限公司 Photographing method, device and equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5980124A (en) * 1998-08-24 1999-11-09 Eastman Kodak Company Camera tripod having speech recognition for controlling a camera
JP2004037998A (en) * 2002-07-05 2004-02-05 Denso Corp Vocal controller
KR20120002801A (en) * 2010-07-01 2012-01-09 엘지전자 주식회사 Monitering camera and method for tracking audio source thereof
CN103702028A (en) * 2013-12-19 2014-04-02 小米科技有限责任公司 Method and device for controlling shooting and terminal equipment
US10178293B2 (en) * 2016-06-22 2019-01-08 International Business Machines Corporation Controlling a camera using a voice command and image recognition
CN105895099B (en) * 2016-06-28 2019-07-23 Oppo广东移动通信有限公司 Control method, device and the intelligent terminal of automatic telescopic self-shooting bar
CN106131413B (en) * 2016-07-19 2020-04-14 纳恩博(北京)科技有限公司 Shooting equipment and control method thereof
CN108712610A (en) * 2018-05-18 2018-10-26 北京京东尚科信息技术有限公司 Intelligent camera
CN111757007B (en) * 2020-07-09 2022-02-08 深圳市欢太科技有限公司 Image shooting method, device, terminal and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970483A (en) * 2012-11-26 2013-03-13 广东欧珀移动通信有限公司 Voice control method and device of camera head
CN105844877A (en) * 2016-03-16 2016-08-10 广东欧珀移动通信有限公司 Method, apparatus and system for controlling selfie stick with mobile terminal, and mobile terminal
CN208459748U (en) * 2018-07-13 2019-02-01 北京京东尚科信息技术有限公司 A kind of film studio
CN110602391A (en) * 2019-08-30 2019-12-20 Oppo广东移动通信有限公司 Photographing control method and device, storage medium and electronic equipment
CN110868542A (en) * 2019-11-22 2020-03-06 深圳传音控股股份有限公司 Photographing method, device and equipment

Also Published As

Publication number Publication date
WO2022007518A1 (en) 2022-01-13
CN111757007A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111757007B (en) Image shooting method, device, terminal and storage medium
EP4199529A1 (en) Electronic device for providing shooting mode based on virtual character and operation method thereof
CN113727012B (en) Shooting method and terminal
WO2019137131A1 (en) Image processing method, apparatus, storage medium, and electronic device
CN111491102B (en) Detection method and system for photographing scene, mobile terminal and storage medium
CN109831636B (en) Interactive video control method, terminal and computer readable storage medium
CN109348135A (en) Photographic method, device, storage medium and terminal device
CN103227907A (en) Method, device and system for remotely controlling image capture unit
CN108200337B (en) Photographing processing method, device, terminal and storage medium
WO2019047046A1 (en) Photographing method and user terminal
CN108156376A (en) Image-pickup method, device, terminal and storage medium
CN108600610A (en) Shoot householder method and device
WO2016165614A1 (en) Method for expression recognition in instant video and electronic equipment
CN108632543A (en) Method for displaying image, device, storage medium and electronic equipment
CN107277368A (en) A kind of image pickup method and filming apparatus for smart machine
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108924413B (en) Shooting method and mobile terminal
CN108989666A (en) Image pickup method, device, mobile terminal and computer-readable storage medium
CN111185903B (en) Method and device for controlling mechanical arm to draw portrait and robot system
CN108668061A (en) Intelligent camera
CN112073639A (en) Shooting control method and device, computer readable medium and electronic equipment
WO2023087929A1 (en) Assisted photographing method and apparatus, and terminal and computer-readable storage medium
CN115439307A (en) Style conversion method, style conversion model generation method, and style conversion system
CN111310701B (en) Gesture recognition method, device, equipment and storage medium
CN109194820A (en) A kind of image pickup method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant