
CN106454074A - Mobile terminal and shooting processing method - Google Patents


Info

Publication number
CN106454074A
CN106454074A · Application CN201610832191.7A
Authority
CN
China
Prior art keywords
shooting
environment
pose
mobile terminal
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610832191.7A
Other languages
Chinese (zh)
Inventor
孙垚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201610832191.7A priority Critical patent/CN106454074A/en
Publication of CN106454074A publication Critical patent/CN106454074A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a mobile terminal and a shooting processing method. The mobile terminal comprises: a pre-acquisition unit for pre-acquiring an environment; an analysis unit for analyzing the pre-acquisition result to obtain a pose of the shooting object adapted to the environment and a shooting orientation of the photographer adapted to the environment; a prompting unit for performing prompt operations based on the adapted pose and the adapted shooting orientation; and a shooting unit for shooting the environment when detection shows that the requirements of the prompt operations are met. With the mobile terminal and the shooting processing method, a user can take high-quality shots in an efficient manner.

Description

Mobile terminal and shooting processing method
Technical Field
The present invention relates to a shooting technology, and in particular, to a mobile terminal and a shooting processing method.
Background
Mobile terminals such as smart phones and tablet computers are commonly used in modern society.
Photographing (such as taking a picture or recording a video) is an essential function of today's mobile terminals, one with which users are very familiar and which they use often.
For most users, the hardest skills to master are striking a suitable posture in front of a beautiful scene and blending that posture into the scene; for ordinary mobile terminal users, these are the most difficult things to master and put into practice.
The related art has not yet provided an effective solution for helping a user take high-quality shots in an efficient manner.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a mobile terminal and a shooting processing method to solve at least one problem in the prior art.
The technical scheme of the embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a mobile terminal, where the mobile terminal includes:
the pre-acquisition unit is used for pre-acquiring the environment;
the analysis unit is used for analyzing the result of pre-acquisition of the environment to obtain the pose of the shooting object matched with the environment and the shooting direction of the shooter matched with the environment;
the prompting unit is used for performing prompting operation based on the pose of the shooting object matched with the environment and the shooting direction of the shooter matched with the environment;
and the shooting unit is used for shooting the environment when it is detected that the requirements of the prompt operation are met.
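The interaction of the four units can be pictured as a simple capture loop: pre-acquire, analyze, prompt, then shoot only once no prompts remain. The sketch below is a minimal, hypothetical illustration; all names, the stubbed analysis, and the string-valued poses and orientations are invented for illustration and are not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    pose: str          # pose of the shooting object adapted to the environment
    orientation: str   # shooting orientation suggested for the photographer

def pre_acquire(environment):
    """Pre-acquisition unit: capture one or more preview frames of the environment."""
    return {"frames": [environment]}

def analyze(pre_result):
    """Analysis unit: derive an adapted pose and shooting orientation (stubbed)."""
    return Suggestion(pose="arms-raised", orientation="low-angle")

def prompt(suggestion, current_pose, current_orientation):
    """Prompting unit: report what still differs from the suggestion."""
    hints = []
    if current_pose != suggestion.pose:
        hints.append(f"adjust pose to {suggestion.pose}")
    if current_orientation != suggestion.orientation:
        hints.append(f"adjust orientation to {suggestion.orientation}")
    return hints

def shoot_when_satisfied(environment, current_pose, current_orientation):
    """Shooting unit: shoot only once the prompt operation is satisfied."""
    suggestion = analyze(pre_acquire(environment))
    hints = prompt(suggestion, current_pose, current_orientation)
    return "captured" if not hints else hints

# The shot is taken only when pose and orientation both match the suggestion.
result = shoot_when_satisfied("beach", "arms-raised", "low-angle")
```

In this toy loop, any remaining hints are returned instead of an image, mirroring how the prompting unit gates the shooting unit.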
In the foregoing solution, the analysis unit is further configured to analyze the pre-acquisition result based on a pose specified by the photographer, to obtain a position in the environment adapted to the pose and a shooting orientation for capturing that pose.
In the above scheme, the analysis unit is further configured to analyze the pre-acquisition result to obtain a feature of the object in the environment, and search a database for a shooting result of the object with the feature;
and analyzing the pose of the object in the shooting result and the shooting orientation for shooting the pose.
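The database search described above can be pictured as a nearest-neighbour lookup over object features. The following is a hedged sketch: the feature vectors, the distance threshold, and the database entries are hypothetical stand-ins for whatever feature extraction and matching the analysis unit would actually use:

```python
import math

# Hypothetical database: object feature vector -> a prior shooting result
# recording the pose used and the orientation it was shot from.
DATABASE = [
    {"features": [0.9, 0.1, 0.3], "pose": "leaning on rail", "orientation": "eye-level"},
    {"features": [0.2, 0.8, 0.5], "pose": "sitting on steps", "orientation": "high-angle"},
]

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matching_shots(object_features, max_distance=0.5):
    """Return prior shooting results whose object features match the query."""
    return [entry for entry in DATABASE
            if euclidean(entry["features"], object_features) <= max_distance]

# An object whose features resemble the first entry retrieves that shot,
# along with the pose and shooting orientation to analyze.
matches = find_matching_shots([0.85, 0.15, 0.35])
```

Each retrieved entry already carries the pose and the shooting orientation that formed it, which is what the analysis unit goes on to analyze.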
In the above scheme, the analysis unit is further configured to present previews of the shooting results before shooting;
and to analyze the pose of the object in the selected shooting result and the shooting orientation that formed that result.
In the foregoing solution, the analyzing unit is further configured to analyze the pre-acquisition result when the pre-acquisition result is acquired based on a fixed shooting orientation, so as to obtain a pose of the shooting object adapted to the environment when shooting is performed with the fixed shooting orientation.
In the above scheme, the analyzing unit is further configured to analyze the pre-acquisition result when the pre-acquisition result is acquired based on a plurality of shooting orientations, so as to obtain a pose of the shooting object adapted to the environment when shooting is performed in each of the shooting orientations; or,
and analyzing the pre-acquisition result to obtain the optimal shooting orientation in the plurality of shooting orientations and the pose of the object matched with the environment when the object is shot in the optimal orientation.
In the above scheme, the prompting unit is further configured to identify a difference between the current shooting orientation and the adapted shooting orientation, and prompt to adjust a shooting direction and/or position; and identifying the difference between the current pose of the shooting object and the adaptive pose, and prompting to adjust the pose of the shooting object.
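Identifying the difference between the current and the adapted shooting orientation, and between the current and the adapted pose, might be sketched as below. The yaw-angle representation of orientation, the joint-value representation of pose, and the tolerances are assumptions made purely for illustration:

```python
def orientation_hint(current_yaw_deg, target_yaw_deg, tolerance_deg=5.0):
    """Prompt for the smallest rotation that closes the gap to the adapted orientation."""
    # Signed difference wrapped into (-180, 180].
    diff = (target_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= tolerance_deg:
        return None  # orientation already matches: no prompt needed
    direction = "right" if diff > 0 else "left"
    return f"turn {direction} by {abs(diff):.0f} degrees"

def pose_hints(current_pose, adapted_pose, tolerance=0.1):
    """Prompt for each joint whose value differs from the adapted pose."""
    return [f"move {joint}"
            for joint in adapted_pose
            if abs(current_pose.get(joint, 0.0) - adapted_pose[joint]) > tolerance]

# Wrapping handles the 0/360 seam: 350 deg -> 20 deg is a 30 deg right turn,
# not a 330 deg left turn.
hint = orientation_hint(current_yaw_deg=350.0, target_yaw_deg=20.0)
```

The same pattern extends to position prompts ("step left", "move closer") by diffing positions instead of angles.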
In the foregoing solution, the prompting unit is further configured to identify a position corresponding to the adapted pose in a shooting preview interface of the environment, and present an outline of the adapted pose at a corresponding position in the shooting preview interface.
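Presenting the outline of the adapted pose at the corresponding position of the preview interface involves mapping an environment position into preview pixel coordinates and rasterizing an outline there. A toy sketch follows, with a rectangle standing in for a real pose silhouette and all sizes invented:

```python
def to_preview_coords(scene_xy, scene_size, preview_size):
    """Map a position in the environment to pixel coordinates in the preview interface."""
    sx, sy = scene_xy
    sw, sh = scene_size
    pw, ph = preview_size
    return (round(sx / sw * pw), round(sy / sh * ph))

def outline_pixels(center, half_width, half_height):
    """A crude rectangular 'outline' of the adapted pose, centred on the mapped position."""
    cx, cy = center
    top, bottom = cy - half_height, cy + half_height
    left, right = cx - half_width, cx + half_width
    horizontal = [(x, y) for y in (top, bottom) for x in range(left, right + 1)]
    vertical = [(x, y) for x in (left, right) for y in range(top + 1, bottom)]
    return horizontal + vertical

# A spot at the centre of a 10 m x 4 m scene lands at the centre of a 640x480 preview.
center = to_preview_coords((5.0, 2.0), scene_size=(10.0, 4.0), preview_size=(640, 480))
outline = outline_pixels(center, half_width=2, half_height=3)
```

A real implementation would presumably draw a human-shaped silhouette from the adapted pose rather than a rectangle; the coordinate mapping is the part this sketch aims to show.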
In a second aspect, an embodiment of the present invention provides a shooting processing method, where the method includes:
pre-collecting the environment;
analyzing a result of pre-collecting the environment to obtain a pose of a shooting object adapted to the environment and a shooting orientation of a shooter adapted to the environment;
performing a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment;
and performing a shooting operation on the environment when it is detected that the prompt operation is satisfied.
In the above scheme, analyzing the result of pre-acquiring the environment to obtain the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment includes:
analyzing the pre-acquisition result based on the pose specified by the photographer, to obtain a position in the environment adapted to the pose and a shooting orientation for capturing that pose.
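Finding a position in the environment that fits a photographer-specified pose could, for instance, amount to scanning the pre-acquired scene for free space large enough for the pose. The occupancy grid and pose footprint below are hypothetical illustrations of that idea, not the patent's method:

```python
# Toy occupancy grid from pre-acquisition: 1 = occupied (rocks, trees), 0 = free ground.
GRID = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1],
]

def fits(grid, row, col, height, width):
    """True if a height x width patch of free cells starts at (row, col)."""
    if row + height > len(grid) or col + width > len(grid[0]):
        return False
    return all(grid[r][c] == 0
               for r in range(row, row + height)
               for c in range(col, col + width))

def position_for_pose(grid, pose_footprint):
    """Return the first grid cell where the specified pose's footprint fits."""
    h, w = pose_footprint
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if fits(grid, r, c, h, w):
                return (r, c)
    return None  # no position in this environment is adapted to the pose

# A 2x2 footprint first fits at the top-right free area of the toy grid.
spot = position_for_pose(GRID, pose_footprint=(2, 2))
```

A real analysis unit would also weigh composition (lighting, background) when ranking candidate positions; this sketch only captures the "where does the pose physically fit" part.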
In the above scheme, analyzing the result of pre-acquiring the environment to obtain the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment includes:
analyzing the pre-acquisition result to obtain the characteristics of the object in the environment, and searching the shooting result of the object which accords with the characteristics in a database;
and analyzing the pose of the object in the shooting result and the shooting orientation for forming the shooting result through shooting.
In the above scheme, analyzing the pose of the object in the shooting result and the shooting orientation that formed the shooting result includes:
presenting previews of the shooting results before shooting;
and analyzing the pose of the object in the selected shooting result and the shooting orientation that formed that result.
In the above scheme, analyzing the result of pre-acquiring the environment to obtain the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment includes:
and when the pre-acquisition result is acquired based on a fixed shooting orientation, analyzing the pre-acquisition result to obtain the pose of the shooting object matched with the environment when shooting is carried out in the fixed shooting orientation.
In the above scheme, analyzing the result of pre-acquiring the environment to obtain the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment includes:
when the pre-acquisition result is acquired based on a plurality of shooting orientations, analyzing the pre-acquisition result to obtain the pose of the shooting object matched with the environment when shooting is carried out in each shooting orientation;
or,
and analyzing the pre-acquisition result to obtain the optimal shooting orientation in the plurality of shooting orientations and the pose of the object matched with the environment when the object is shot in the optimal orientation.
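Choosing the optimal shooting orientation among several pre-acquired candidates can be sketched as scoring each candidate's frame and keeping the best. The brightness-based score and the pose lookup table here are invented placeholders for whatever criteria the analysis unit would really apply:

```python
def score_frame(frame):
    """Hypothetical composition score: here, just mean brightness in [0, 1]."""
    return sum(frame) / len(frame)

def best_orientation(pre_results):
    """pre_results: mapping of shooting orientation -> pre-acquired frame (toy pixel list).
    Returns the orientation whose frame scores highest, plus a pose suited to it."""
    best = max(pre_results, key=lambda o: score_frame(pre_results[o]))
    # In the patent, the adapted pose would come from analysis; this is a stand-in table.
    pose_for = {"front": "standing", "side": "profile", "low-angle": "jumping"}
    return best, pose_for.get(best, "standing")

orientation, pose = best_orientation({
    "front": [0.4, 0.5, 0.6],
    "side": [0.7, 0.8, 0.9],
    "low-angle": [0.2, 0.3, 0.1],
})
```

The per-orientation branch of the scheme falls out of the same structure: instead of taking the `max`, report a pose for every candidate orientation.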
In the above scheme, performing the prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment includes:
identifying the difference between the current shooting direction and the adaptive shooting direction, and prompting to adjust the shooting direction and/or position;
and identifying the difference between the current pose of the shooting object and the adaptive pose, and prompting to adjust the pose of the shooting object.
In the above scheme, the prompting to adjust the pose of the photographic object includes:
and identifying a position corresponding to the adapted pose in a shooting preview interface of the environment, and presenting the outline of the adapted pose at the corresponding position in the shooting preview interface.
The embodiments of the invention achieve the following beneficial effects: the photographer can shoot based on the suggested shooting orientation and quickly capture a satisfactory image of the environment without repeatedly changing shooting position and direction; the shooting object need not repeatedly change position and posture in the environment, as simply adopting the prompted pose achieves the best blending into the environment. This yields a good shooting effect, improves shooting efficiency, and raises the efficiency and degree of intelligence of human-computer interaction during shooting.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal 100 for implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal 100 shown in FIG. 1;
fig. 3 is a functional structure diagram of an alternative mobile terminal 400 according to various embodiments of the present invention;
FIG. 4-1 is a schematic diagram of an implementation flow of an alternative shooting processing method according to an embodiment of the present invention;
fig. 4-2 is a schematic flow chart illustrating an implementation of an alternative shooting processing method according to an embodiment of the present invention;
FIG. 5-1 is a schematic diagram of an implementation flow of an alternative shooting processing method according to an embodiment of the present invention;
fig. 5-2 is a schematic flow chart illustrating an implementation of an alternative shooting processing method according to an embodiment of the present invention;
fig. 6 is a schematic implementation flow diagram of an alternative shooting processing method according to an embodiment of the present invention.
Detailed Description
It should be understood that the embodiments described herein are only for explaining the technical solutions of the present invention, and are not intended to limit the scope of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, etc., and a stationary terminal such as a digital TV, a desktop computer, etc. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 shows a schematic hardware configuration of an alternative mobile terminal 100 implementing various embodiments of the present invention. As shown in fig. 1, the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 illustrates the mobile terminal 100 having various components, but it is to be understood that not all illustrated components are required to be implemented; more or fewer components may alternatively be implemented. The elements of the mobile terminal 100 are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal 100. The wireless internet module 113 may be internally or externally coupled to the terminal. The wireless internet access technology referred to by the wireless internet module 113 may include Wireless Local Area Network (WLAN), wireless compatibility authentication (Wi-Fi), wireless broadband (Wibro), worldwide interoperability for microwave access (Wimax), High Speed Downlink Packet Access (HSDPA), and the like.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal 100. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station and output via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data to control various operations of the mobile terminal 100 according to a command input by a user. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port (a typical example is a Universal Serial Bus (USB) port), a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means.
The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal 100. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal 100 is accurately mounted on the cradle.
The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, mobile terminal 100 may include two or more display units (or other display devices), for example, mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user thereof. By providing such a tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs or the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, etc.) that has been output or is to be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal 100. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, the mobile terminal 100 has been described in terms of its functions. Hereinafter, for the sake of brevity, the slide-type mobile terminal 100 will be described as an example among the various types of mobile terminals 100 (folder-type, bar-type, swing-type, slide-type, and the like). However, the present invention can be applied to any type of mobile terminal 100 and is not limited to the slide-type mobile terminal 100.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which the mobile terminal 100 according to the present invention is capable of operating will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support multiple frequency allocations, with each frequency allocation having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency allocation may be referred to as a CDMA channel. The BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology. In such a case, the term "base station" may be used to refer collectively to a BSC 275 and at least one BS 270. A base station may also be referred to as a "cell". Alternatively, individual sectors of a particular BS 270 may be referred to as cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several satellites 300 are shown, for example, Global Positioning System (GPS) satellites 300 may be employed. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in fig. 1 is generally configured to cooperate with satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal 100 may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communications. Each reverse link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSCs 275, and the BSCs 275 accordingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
The mobile communication module 112 of the wireless communication unit 110 in the mobile terminal accesses the mobile communication network (such as a 2G/3G/4G mobile communication network) based on the necessary data of the mobile communication network built into the mobile terminal (including user identification information and authentication information), so as to transmit mobile communication data (including uplink and downlink mobile communication data) for services of the mobile terminal user such as web browsing and network multimedia playing.
The wireless internet module 113 of the wireless communication unit 110 implements the function of a wireless hotspot by running the related protocol functions of the wireless hotspot. The wireless hotspot supports access by a plurality of mobile terminals (any mobile terminal other than the mobile terminal itself), and transmits mobile communication data (including uplink and downlink mobile communication data) for services of the mobile terminal user such as web browsing and network multimedia playing by multiplexing the mobile communication connection between the mobile communication module 112 and the mobile communication network. Since the mobile terminal essentially multiplexes the mobile communication connection between itself and the communication network for transmitting mobile communication data, the mobile communication data traffic consumed by the mobile terminal is charged against the communication tariff of the mobile terminal by a charging entity on the communication network side, thereby consuming the data traffic of the mobile communication data included in the communication tariff contracted for the mobile terminal.
The embodiment of the present invention is proposed based on the hardware structure of the mobile terminal 100 and the communication system.
Embodiment One
In this embodiment, referring to the optional functional structure diagram of the mobile terminal 400 shown in fig. 3, the mobile terminal includes: a pre-acquisition unit 410, an analysis unit 420, a prompting unit 430, and a shooting unit 440. Each unit is described below.
The pre-acquisition unit 410 is configured to pre-acquire the environment, and in practical applications may be implemented by the camera 121 shown in fig. 1.
In the process of pre-acquisition, the environment is acquired, and a pre-acquisition result, that is, an image formed by acquiring the environment, is presented on a shooting preview interface in real time.
The analysis unit 420 is configured to analyze the pre-acquisition result based on a pose specified by the photographer, to obtain the position in the environment that fits the pose and the shooting orientation for capturing that pose.
For example, before photographing the environment with the mobile terminal, the photographer may specify a pose that the shooting object is expected to adopt. The analysis unit 420 analyzes the pre-acquisition result of the environment captured by the pre-acquisition unit 410 and determines the optimal position in the environment for the shooting object to adopt the specified pose, so that the pose achieves the best visual effect at that position.
The pose of the shooting object adapted to the environment indicates the position and posture the shooting object should take in the environment when the photographer shoots the environment with the mobile terminal; when the shooting object matches this position and posture, the best visual effect is obtained. For example, the position of the shooting object may be described as a distance and direction relative to the shooting object's current position, and the pose of the shooting object may be described in terms of the postures of its head, arms, legs, and so on, so that when the pose is prompted, the shooting object can conveniently and quickly adapt to the environment.
The shooting orientation of the photographer adapted to the environment indicates the position of the photographer in the environment and the shooting direction when the photographer shoots the environment and the shooting object with the mobile terminal. For example, the position of the photographer may be described as a distance and direction relative to the photographer's current position, and the shooting direction may be described as an adjustment of the photographer's current shooting direction (for example, moving the camera upwards or to the right), so that when prompted, the photographer can conveniently and quickly adopt the shooting orientation adapted to the environment.
The prompting unit 430 is configured to perform a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
The prompting unit 430 identifies the difference between the photographer's current shooting orientation and the adapted shooting orientation, and prompts the photographer to adjust the shooting direction and/or position. In practical applications, the prompting unit 430 may use a graphical dynamic prompt in the shooting preview interface, and may also adopt a voice prompt or a vibration prompt; the prompting mode may be selected flexibly, for example according to a preset mode. Depending on the prompting mode, the prompting unit 430 may be implemented by the display unit 151, the audio output module 152, or the alarm unit 153 shown in fig. 1.
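To make the orientation prompt concrete, the following sketch (not part of the embodiment; the `Orientation` fields, the tolerance, and the hint strings are all assumptions introduced for illustration) compares the photographer's current shooting orientation with the adapted one and produces the kind of adjustment hints the prompting unit 430 might present:

```python
from dataclasses import dataclass


@dataclass
class Orientation:
    """Hypothetical shooting orientation: a planar position in the scene
    plus a camera direction, simplified here to pan/tilt angles."""
    x: float     # metres, + = to the photographer's right
    y: float     # metres, + = forward
    pan: float   # degrees, + = camera turned right
    tilt: float  # degrees, + = camera tilted up


def orientation_prompt(current: Orientation, adapted: Orientation,
                       tol: float = 2.0) -> list[str]:
    """Compare the current orientation with the adapted one and emit
    human-readable adjustment hints for any component outside tolerance."""
    hints = []
    if adapted.x - current.x > tol:
        hints.append("move right")
    elif current.x - adapted.x > tol:
        hints.append("move left")
    if adapted.pan - current.pan > tol:
        hints.append("turn the camera right")
    elif current.pan - adapted.pan > tol:
        hints.append("turn the camera left")
    if adapted.tilt - current.tilt > tol:
        hints.append("tilt the camera up")
    elif current.tilt - adapted.tilt > tol:
        hints.append("tilt the camera down")
    return hints or ["orientation matched"]
```

A graphical prompt would render such hints as arrows in the preview interface, while a voice prompt would read them out.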
The prompting unit 430 also identifies the difference between the current pose of the shooting object and the adapted pose, and prompts the shooting object to adjust its pose until it is consistent with the pose adapted to the environment. In practical applications, the prompting unit 430 may present a graphical dynamic prompt in the shooting preview interface.
For example, a position corresponding to the adapted pose is identified in a shooting preview interface of the environment, and an outline of the adapted pose is presented at the corresponding position in the shooting preview interface.
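As a minimal illustration of presenting the outline at the corresponding position, the sketch below uses a tiny mutable grid of characters as a stand-in for the shooting preview interface; the grid representation, the point list, and the marker character are assumptions for illustration only:

```python
def overlay_outline(preview, outline_points, marker="+"):
    """Place the adapted pose's outline at its position in a textual
    stand-in for the shooting preview interface.

    preview: list of rows, each a mutable list of characters.
    outline_points: (row, col) pairs of the outline; out-of-bounds
    points are silently skipped, as a real renderer would clip them.
    """
    for r, c in outline_points:
        if 0 <= r < len(preview) and 0 <= c < len(preview[r]):
            preview[r][c] = marker
    return preview
```

In a real implementation the outline would instead be rasterised into the preview frame buffer, but the clipping-and-stamping logic is the same.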
Of course, the prompting unit 430 may also use a voice prompt or a vibration prompt to indicate whether the pose of the shooting object is consistent with the pose adapted to the environment. The prompting mode may be selected flexibly, for example according to a preset mode.
The shooting unit 440, which may be implemented by the camera 121 shown in fig. 1, is configured to detect that the conditions of the prompt operation are satisfied, that is, when the photographer's current shooting orientation is consistent with the shooting orientation adapted to the environment and the current pose of the shooting object is consistent with the pose adapted to the environment, to prompt the photographer to perform a shooting operation on the environment and the shooting object in it. For example, a shooting function key of the mobile terminal may be triggered in a graphical, voice, or vibration manner, so as to capture the environment, for example by taking a picture or a video.
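One way such a readiness check might be implemented is to measure how well the subject's current silhouette fills the prompted outline; the sketch below uses intersection-over-union on flat 0/1 masks, which is an assumed metric rather than one specified by the embodiment:

```python
def contour_match(pose_mask, outline_mask):
    """Intersection-over-union between the subject's current silhouette
    and the prompted outline, both as flat 0/1 masks; 1.0 is a perfect fit."""
    inter = sum(1 for p, o in zip(pose_mask, outline_mask) if p and o)
    union = sum(1 for p, o in zip(pose_mask, outline_mask) if p or o)
    return inter / union if union else 1.0


def ready_to_shoot(pose_mask, outline_mask, threshold=0.9):
    """Fire the shooting prompt only once the silhouettes agree to within
    the (assumed) threshold."""
    return contour_match(pose_mask, outline_mask) >= threshold
```

The threshold trades strictness against responsiveness: a lower value prompts the shot earlier but tolerates a looser match with the outline.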
In addition, it should be noted that although this embodiment is described in terms of a photographer and a shooting object, the two need not be strictly distinguished; for example, the photographer and the shooting object may be the same user, in which case the mobile terminal provided in this embodiment may be applied to a self-portrait scene. In particular, when the mobile terminal has two screens and is used for a self-portrait scene, one screen (facing away from the user) may display the shooting preview interface, while the other screen (facing the user) may graphically prompt the shooting orientation together with the shooting position, so that the user can quickly adopt the prompted pose at the corresponding shooting orientation, overcoming the difficulty of obtaining a high-quality image in a conventional self-portrait scene.
This embodiment has the following advantageous effects:
based on the pre-acquired environment and the shooting pose desired by the photographer, the pose of the shooting object is determined by finding the best position in the environment for adopting that pose, and the best shooting orientation for the pose is obtained by analyzing the pre-acquisition result. By prompting the shooting orientation and the pose, the photographer can shoot from the optimal shooting orientation, and a satisfactory image of the environment can be captured quickly without repeatedly changing the shooting position and direction. Likewise, the position and posture of the shooting object in the environment need not be changed repeatedly; simply adopting the prompted pose achieves the best effect of blending into the environment. This yields a good shooting effect, improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Embodiment Two
In this embodiment, referring to the optional functional structure diagram of the mobile terminal 400 shown in fig. 3, the mobile terminal includes: a pre-acquisition unit 410, an analysis unit 420, a prompting unit 430, and a shooting unit 440. Each unit is described below.
A pre-acquisition unit 410 for pre-acquiring the environment.
In the process of pre-acquisition, the environment is acquired, and a pre-acquisition result, that is, an image formed by acquiring the environment, is presented on a shooting preview interface in real time.
The analysis unit 420 is configured to analyze the pre-acquisition result to obtain the features of objects in the environment, search a database for shooting results matching those features, and analyze the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured.
For example, the analysis unit 420 may invoke the computing resources of the mobile terminal itself to analyze the features of objects in the environment from the pre-acquisition result, such as the age, gender, and dressing style of a person. Combined with a local database of the mobile terminal (storing candidate shooting results such as photos and videos), it may find shooting results matching those features by feature matching, and analyze the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured. Of course, when multiple shooting results are found, previews of the shooting results may be presented in the shooting preview interface, so that the photographer can select the shooting result desired to be imitated; the pose of the object in the selected shooting result and its shooting orientation are then analyzed. By calling the local computing resources and database of the mobile terminal to determine the adapted shooting orientation and pose, the mobile terminal can assist the photographer in shooting high-quality images anytime and anywhere, regardless of whether the mobile terminal can currently perform network communication.
As another example, the analysis unit 420 may invoke computing resources in the cloud to analyze the features of objects in the environment from the pre-acquisition result, such as the age, gender, and dressing style of a person, as well as object features such as contour and texture. Combined with a cloud database (storing candidate shooting results such as photos and videos), it may find shooting results matching those features by feature matching, and analyze the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured. Of course, when multiple shooting results are found in the cloud, the analysis unit 420 may present previews of the shooting results in the shooting preview interface, so that the photographer can select the shooting result desired to be imitated; the pose of the object in the selected shooting result and its shooting orientation are then analyzed. By calling the computing resources and database of the cloud, with its strong computing power and large store of shooting results, more shooting results can be obtained efficiently for the photographer to choose from, further improving the efficiency and quality of the shot images.
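The feature-matching lookup described above might be sketched as follows. The feature vectors, the cosine-similarity measure, and the `min_sim`/`top_k` parameters are illustrative assumptions; a real implementation would extract the features with a vision model running on the terminal or in the cloud:

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def find_matching_shots(query_features, database, top_k=3, min_sim=0.8):
    """database: list of (shot_id, feature_vector) pairs -- a stand-in for
    the local or cloud store of candidate shooting results. Returns the ids
    of the best-matching shots for the photographer to choose from."""
    scored = [(cosine(query_features, feats), shot_id)
              for shot_id, feats in database]
    scored = [(s, i) for s, i in scored if s >= min_sim]
    scored.sort(reverse=True)
    return [shot_id for _, shot_id in scored[:top_k]]
```

Returning several candidates rather than one matches the behaviour described above, where previews of multiple shooting results are presented for the photographer to pick from.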
For the description of the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment, reference may be made to the description of the first embodiment, which is not described herein again.
The prompting unit 430 is configured to perform a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
For the processing in which the prompting unit 430 performs the prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment, reference may be made to the description of the first embodiment, which is not repeated here.
The shooting unit 440 is configured to detect that the conditions of the prompt operation are satisfied, that is, when the photographer's current shooting orientation is consistent with the shooting orientation adapted to the environment and the current pose of the shooting object is consistent with the pose adapted to the environment, to prompt the photographer to perform a shooting operation on the environment and the shooting object in it. For example, a shooting function key of the mobile terminal may be triggered in a graphical, voice, or vibration manner, so as to capture the environment.
This embodiment has the following advantageous effects:
by analyzing the pre-acquired environment, shooting results that the photographer may wish to imitate are obtained; the shooting result selected by the photographer is then analyzed to determine the corresponding shooting orientation and pose. By prompting the shooting orientation and the pose, the photographer can shoot from the optimal shooting orientation, and the selected shooting result can be imitated without repeatedly changing the shooting position and direction. This improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Embodiment Three
In this embodiment, referring to the optional functional structure diagram of the mobile terminal 400 shown in fig. 3, the mobile terminal includes: a pre-acquisition unit 410, an analysis unit 420, a prompting unit 430, and a shooting unit 440. Each unit is described below.
A pre-acquisition unit 410 for pre-acquiring the environment.
In the process of pre-acquisition, the environment is acquired, and a pre-acquisition result, that is, an image formed by acquiring the environment, is presented on a shooting preview interface in real time.
The analysis unit 420 is configured to analyze the pre-acquisition orientation based on the pre-acquisition result, and to determine, according to the pre-acquisition shooting orientation, the pose of the shooting object adapted to the environment when the photographer shoots from that orientation. Illustratively, this includes the following cases:
1) When the analysis unit 420 determines that the pre-acquisition result was acquired from a fixed shooting orientation, it analyzes the pre-acquisition result to obtain the pose of the shooting object adapted to the environment when shooting from that fixed orientation.
2) When the analysis unit 420 determines that the pre-acquisition result was acquired from a plurality of shooting orientations, it analyzes the pre-acquisition result to obtain the pose of the shooting object adapted to the environment when shooting from each of those orientations.
3) The analysis unit 420 analyzes the pre-acquisition result to obtain the best shooting orientation among the plurality of shooting orientations, and the pose at which the shooting object adapts to the environment when shot from that best orientation.
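As an illustration of case 3), the sketch below scores each candidate orientation from assumed per-orientation statistics of the pre-acquisition frames (mean brightness and distance of the candidate subject position from a rule-of-thirds point) and picks the highest-scoring one; the scoring heuristic and its weights are assumptions, not part of the embodiment:

```python
def composition_score(frame_stats):
    """frame_stats: assumed statistics for one shooting orientation:
    'brightness' is the mean frame brightness in [0, 1] (0.5 is ideal),
    'thirds_offset' is how far the candidate subject position lies from
    the nearest rule-of-thirds point, normalised to [0, 1] (0 is ideal)."""
    brightness_term = 1.0 - abs(frame_stats["brightness"] - 0.5) * 2
    thirds_term = 1.0 - frame_stats["thirds_offset"]
    return 0.4 * brightness_term + 0.6 * thirds_term


def best_orientation(candidates):
    """candidates: {orientation_id: frame_stats}. Returns the orientation
    whose pre-acquisition frame scores highest."""
    return max(candidates, key=lambda oid: composition_score(candidates[oid]))
```

A production analyzer would use richer composition cues, but the selection step — score every orientation, keep the maximum — is the same.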
For the description of the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment, reference may be made to the description of the first embodiment, which is not described herein again.
The prompting unit 430 is configured to perform a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
For the processing in which the prompting unit 430 performs the prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment, reference may be made to the description of the first embodiment, which is not repeated here.
The shooting unit 440 is configured to detect that the conditions of the prompt operation are satisfied, that is, when the photographer's current shooting orientation is consistent with the shooting orientation adapted to the environment and the current pose of the shooting object is consistent with the pose adapted to the environment, to prompt the photographer to perform a shooting operation on the environment and the shooting object in it. For example, a shooting function key of the mobile terminal may be triggered in a graphical, voice, or vibration manner, so as to capture the environment, for example by taking a picture or a video.
In addition, it should be noted that although this embodiment is described in terms of a photographer and a shooting object, the two need not be strictly distinguished; for example, the photographer and the shooting object may be the same user, in which case the mobile terminal provided in this embodiment may be applied to a self-portrait scene. In particular, when the mobile terminal has two screens and is used for a self-portrait scene, one screen (facing away from the user) may display the shooting preview interface, while the other screen (facing the user) may graphically prompt the shooting orientation together with the shooting position, so that the user can quickly adopt the prompted pose at the corresponding shooting orientation, overcoming the difficulty of obtaining a high-quality image in a conventional self-portrait scene.
This embodiment has the following advantageous effects:
the shooting orientation desired by the photographer is analyzed from the pre-acquired environment, and the pose adapted to the environment for that orientation is determined. By prompting the shooting orientation and the pose, when the photographer shoots from the desired orientation, the best pose of the shooting object for that orientation is obtained automatically, and a satisfactory image of the environment can be captured quickly without repeatedly changing the position and direction of the shooting object. This yields a good shooting effect, improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Here, it should be noted that the above description of the mobile terminal embodiments is similar to the description of the method embodiments below and has similar beneficial effects, so it is not repeated. For technical details not disclosed in the method embodiments of the present invention, please refer to the description of the mobile terminal embodiments of the present invention; for brevity, they are not described again.
Embodiment Four
Corresponding to the foregoing embodiments, this embodiment describes a shooting processing method. Referring to the optional flow diagram of the shooting processing method shown in fig. 4-1, the method includes the following steps:
Step 101, pre-acquiring the environment.
Step 102, analyzing the pre-acquisition result based on the pose specified by the photographer, to obtain the position in the environment that fits the pose and the shooting orientation for capturing that pose.
Step 103, performing a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
Step 104, when it is detected that the conditions of the prompt operation are satisfied, capturing and shooting the environment.
Referring to a specific example, and to the optional flow diagram of the shooting processing method shown in fig. 4-2: when the user turns on a camera of the mobile terminal to shoot (step 201), the scene is first pre-captured, and a scene picture preferred by the user is acquired (step 202). The mobile terminal analyzes the acquired image, determines the poses adapted to the environment (step 203), and presents them to the user for selection. The user selects a preferred pose (step 204). The mobile terminal analyzes the best position for shooting that pose, and presents a contour map of the selected pose at the corresponding position in the shooting preview interface (step 205). The shooting object imitates the pose, adopting it at the corresponding position in the environment (step 206). The photographer adjusts the focal length or the position of the mobile phone, shoots when the pose of the shooting object coincides with the contour in the shooting preview interface, and shooting is completed (step 207).
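The step 201-207 flow can be sketched as a small driver over hypothetical collaborator objects; the `camera`, `analyzer`, and `ui` interfaces below are assumptions introduced purely for illustration:

```python
def shooting_flow(camera, analyzer, ui):
    """Sketch of the step 201-207 flow.

    camera: supplies pre-capture frames and the final capture.
    analyzer: pose analysis over frames.
    ui: presents choices and outlines to the user.
    """
    frame = camera.pre_capture()                 # steps 201-202: pre-acquire
    poses = analyzer.adapted_poses(frame)        # step 203: poses adapted to scene
    pose = ui.choose(poses)                      # step 204: user picks a pose
    position = analyzer.best_position(frame, pose)
    ui.show_outline(pose, position)              # step 205: contour in preview
    # steps 206-207: subject and photographer adjust until the live pose
    # coincides with the prompted outline, then the shot is taken
    while not analyzer.pose_matches_outline(camera.pre_capture(),
                                            pose, position):
        pass
    return camera.capture()
```

A real implementation would run the matching check on each preview frame rather than busy-waiting, but the control structure is the same.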
This embodiment has the following advantageous effects:
a satisfactory image of the environment can be captured quickly without repeatedly changing the shooting position and direction. The position and posture of the shooting object in the environment need not be changed repeatedly; simply adopting the prompted pose achieves the best effect of blending into the environment. This yields a good shooting effect, improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Embodiment Five
Corresponding to the foregoing embodiments, this embodiment describes a shooting processing method. Referring to the optional flow diagram of the shooting processing method shown in fig. 5-1, the method includes the following steps:
step 301, pre-collecting the environment.
Step 302, analyzing the pre-acquisition result to obtain the features of objects in the environment, and searching a database for shooting results of objects matching those features.
Step 303, analyzing the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured.
For example, the computing resources of the mobile terminal may be invoked: the features of objects in the environment analyzed from the pre-acquisition result are combined with a local database of the mobile terminal (storing candidate shooting results such as photos and videos), shooting results matching those features are found in the database by feature matching, and the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured are analyzed.
As another example, computing resources of the cloud may be invoked: shooting results matching the features of objects in the environment analyzed from the pre-acquisition result are found in the cloud database by feature matching, and the pose of the object in each shooting result and the shooting orientation from which the shooting result was captured are analyzed.
Step 304, performing a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
Step 305, when it is detected that the conditions of the prompt operation are satisfied, capturing and shooting the environment.
Referring to a specific example, and to the optional flow diagram of the shooting processing method shown in fig. 5-2: when the user turns on the camera (step 401) to take a picture, the user first shoots the person and the scene (step 402). The captured picture is uploaded to the server (step 403). After receiving the picture, the server performs image recognition and analysis on it (step 404) and recognizes the person features and scene features in the picture (step 405). The features of the picture are compared with the features of pictures stored in the server's database (step 406), pleasing pictures and poses shot by people in the same or similar scenes are obtained by the comparison, and these are returned to the user as references (step 407). The user can select the picture most suitable for himself from the pictures returned by the server (step 408). During formal photographing, an outline of the main scene or main person is automatically generated in the shooting preview interface, and the user may also select the main outline (step 409). The photographer brings the main scene or person into the outline by adjusting the focal length or the device position, and completes photographing by clicking the shooting function button (step 410).
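The server-side comparison of step 406 might be sketched as follows; the gallery layout, the L1-based similarity, and the equal weighting of person and scene features are assumptions for illustration:

```python
def match_reference_pictures(person_feats, scene_feats, gallery, top_k=3):
    """Rank stored pictures by combined person/scene feature similarity.

    gallery: list of dicts with 'id', 'person', and 'scene' feature
    vectors -- a stand-in for the server's picture database. Similarity
    here is a simple L1-distance-based score in (0, 1].
    """
    def sim(u, v):
        return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(u, v)))

    ranked = sorted(
        gallery,
        key=lambda g: 0.5 * sim(person_feats, g["person"])
        + 0.5 * sim(scene_feats, g["scene"]),
        reverse=True)
    return [g["id"] for g in ranked[:top_k]]
```

The top-ranked ids would then be returned to the terminal so the user can choose a reference picture, as in steps 407-408.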
This embodiment has the following advantageous effects:
by analyzing the pre-acquired environment, shooting results that the photographer may wish to imitate are obtained; the shooting result selected by the photographer is then analyzed to determine the corresponding shooting orientation and pose. By prompting the shooting orientation and the pose, the photographer can shoot from the optimal shooting orientation, and the selected shooting result can be imitated without repeatedly changing the shooting position and direction. This improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Embodiment Six
Corresponding to the third embodiment, this embodiment describes a shooting processing method. Referring to the optional flow diagram of the shooting processing method shown in fig. 6, the method includes the following steps:
step 501, pre-collecting an environment.
Step 502, analyzing the pre-acquisition orientation based on the pre-acquisition result, and determining, according to the pre-acquisition shooting orientation, the pose of the shooting object adapted to the environment when the photographer shoots from that orientation.
Illustratively, the following are included:
1) When the analysis unit determines that the pre-acquisition result was acquired from a fixed shooting orientation, the pre-acquisition result is analyzed to obtain the pose of the shooting object adapted to the environment when shooting from that fixed orientation.
2) When the analysis unit determines that the pre-acquisition result was acquired from a plurality of shooting orientations, the pre-acquisition result is analyzed to obtain the pose of the shooting object adapted to the environment when shooting from each of those orientations.
3) The analysis unit analyzes the pre-acquisition result to obtain the best shooting orientation among the plurality of shooting orientations, and the pose of the shooting object adapted to the environment when shot from that best orientation.
Step 503, performing a prompt operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment.
Step 504, when it is detected that the conditions of the prompt operation are satisfied, capturing and shooting the environment.
This embodiment has the following advantageous effects:
the shooting orientation desired by the photographer is analyzed from the pre-acquired environment, and the pose adapted to the environment for that orientation is determined. By prompting the shooting orientation and the pose, when the photographer shoots from the desired orientation, the best pose of the shooting object for that orientation is obtained automatically, and a satisfactory image of the environment can be captured quickly without repeatedly changing the position and direction of the shooting object. This yields a good shooting effect, improves shooting efficiency, and improves the efficiency and intelligence of human-computer interaction during shooting.
Embodiment Seven
This embodiment provides a storage medium storing executable instructions for executing the shooting processing method provided in any one of Embodiments Four to Six.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may exist separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by program instructions executed on relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A mobile terminal, characterized in that the mobile terminal comprises:
the pre-acquisition unit is used for pre-acquiring the environment;
the analysis unit is used for analyzing the result of pre-acquiring the environment to obtain a pose of the shooting object adapted to the environment and a shooting orientation of the photographer adapted to the environment;
the prompting unit is used for performing a prompting operation based on the pose of the shooting object adapted to the environment and the shooting orientation of the photographer adapted to the environment;
and the shooting unit is used for performing a shooting operation on the environment when it is detected that the prompting operation is satisfied.
2. The mobile terminal of claim 1,
the analysis unit is further configured to analyze the pre-acquisition result based on a pose specified by the photographer, to obtain a position in the environment adapted to the pose and a shooting orientation for shooting the corresponding pose.
3. The mobile terminal of claim 1,
the analysis unit is further used for analyzing the pre-acquisition result to obtain characteristics of an object in the environment, searching a database for shooting results of objects having the characteristics, and analyzing the pose of the object in the shooting results and the shooting orientation from which each shooting result was captured.
4. The mobile terminal of claim 3,
the analysis unit is further used for presenting previews of the shooting results before analyzing the pose of the object in the shooting results and the shooting orientation from which each shooting result was captured; and analyzing the pose of the object in a selected shooting result and the shooting orientation for shooting the pose.
5. The mobile terminal of claim 1,
the analysis unit is further configured to, when the pre-acquisition result is acquired from a fixed shooting orientation, analyze the pre-acquisition result to obtain a pose of the shooting object adapted to the environment when shooting is performed from the fixed shooting orientation.
6. The mobile terminal of claim 1,
the analysis unit is further configured to analyze the pre-acquisition result to obtain a pose of the shooting object adapted to the environment when shooting is performed in each shooting orientation when the pre-acquisition result is acquired based on a plurality of shooting orientations; or analyzing the pre-acquisition result to obtain the optimal shooting orientation in the plurality of shooting orientations and the pose of the object adapted to the environment when the object is shot in the optimal orientation.
7. The mobile terminal of claim 1,
the prompting unit is further used for identifying a difference between the current shooting orientation and the adapted shooting orientation, and prompting to adjust the shooting orientation and/or position; and identifying a difference between the current pose of the shooting object and the adapted pose, and prompting to adjust the pose of the shooting object.
8. The mobile terminal of claim 7,
the prompting unit is further configured to identify a position corresponding to the adapted pose in a shooting preview interface of the environment, and present an outline of the adapted pose at a corresponding position in the shooting preview interface.
9. A shooting processing method, characterized by comprising:
pre-collecting the environment;
analyzing the result of pre-collecting the environment to obtain a pose of a shooting object adapted to the environment and a shooting orientation of a photographer adapted to the environment;
performing a prompt operation based on the pose of the shooting object and the shooting orientation of the photographer;
and performing a shooting operation on the environment when it is detected that the prompt operation is satisfied.
10. The method according to claim 9, wherein the analyzing the result of pre-collecting the environment to obtain a pose of a shooting object adapted to the environment and a shooting orientation of a photographer adapted to the environment comprises:
analyzing the pre-collection result based on a pose specified by the photographer to obtain a position in the environment adapted to the pose and a shooting orientation for shooting the corresponding pose.
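Claim 7 describes prompting the photographer to adjust when the current shooting orientation differs from the adapted one. The claims do not fix any geometry, units, or UI, so the following sketch is purely illustrative: the function name, the degree-based bearings, and the tolerance are assumptions.

```python
def orientation_hint(current_deg, target_deg, tol=3.0):
    """Claim-7-style prompt: tell the photographer how to turn so that the
    current shooting orientation matches the adapted one (illustrative)."""
    # Normalize the signed difference into the interval (-180, 180] so the
    # prompt always suggests the shorter turn direction.
    diff = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= tol:
        return "hold"
    return "turn right %.0f°" % diff if diff > 0 else "turn left %.0f°" % -diff
```

For instance, a camera pointing at 350° with an adapted orientation of 10° would be prompted to turn right by 20°, not left by 340°.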
CN201610832191.7A 2016-09-19 2016-09-19 Mobile terminal and shooting processing method Pending CN106454074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610832191.7A CN106454074A (en) 2016-09-19 2016-09-19 Mobile terminal and shooting processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610832191.7A CN106454074A (en) 2016-09-19 2016-09-19 Mobile terminal and shooting processing method

Publications (1)

Publication Number Publication Date
CN106454074A true CN106454074A (en) 2017-02-22

Family

ID=58165682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610832191.7A Pending CN106454074A (en) 2016-09-19 2016-09-19 Mobile terminal and shooting processing method

Country Status (1)

Country Link
CN (1) CN106454074A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107404419A (en) * 2017-08-01 2017-11-28 南京华苏科技有限公司 Based on the anti-false survey method and device of the network covering property of picture or video test
CN108540724A (en) * 2018-04-28 2018-09-14 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109922271A (en) * 2019-04-18 2019-06-21 珠海格力电器股份有限公司 Mobile terminal based on folding screen and photographing method thereof
CN114095662A (en) * 2022-01-20 2022-02-25 荣耀终端有限公司 Shooting guide method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090231457A1 (en) * 2008-03-14 2009-09-17 Samsung Electronics Co., Ltd. Method and apparatus for generating media signal by using state information
CN103929597A (en) * 2014-04-30 2014-07-16 杭州摩图科技有限公司 Shooting assisting method and device
CN104539842A (en) * 2014-12-17 2015-04-22 宇龙计算机通信科技(深圳)有限公司 Intelligent photographing method and photographing device
CN104717413A (en) * 2013-12-12 2015-06-17 北京三星通信技术研究有限公司 Shooting assistance method and equipment
CN104902172A (en) * 2015-05-19 2015-09-09 广东欧珀移动通信有限公司 A method for determining a shooting position and a shooting terminal
CN105827930A (en) * 2015-05-27 2016-08-03 广东维沃软件技术有限公司 A method and device for assisting photography


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107404419A (en) * 2017-08-01 2017-11-28 南京华苏科技有限公司 Based on the anti-false survey method and device of the network covering property of picture or video test
CN107404419B (en) * 2017-08-01 2020-09-01 南京华苏科技有限公司 Network coverage performance test anti-false test method and device based on picture or video
CN108540724A (en) * 2018-04-28 2018-09-14 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109922271A (en) * 2019-04-18 2019-06-21 珠海格力电器股份有限公司 Mobile terminal based on folding screen and photographing method thereof
CN114095662A (en) * 2022-01-20 2022-02-25 荣耀终端有限公司 Shooting guide method and electronic equipment

Similar Documents

Publication Publication Date Title
CN106454121B (en) Double-camera shooting method and device
CN106909274B (en) Image display method and device
CN105100491B (en) A kind of apparatus and method for handling photo
CN105468158B (en) Color adjustment method and mobile terminal
CN106713716B (en) Shooting control method and device for double cameras
CN106412255B (en) Terminal and display methods
CN106303273B (en) A kind of mobile terminal and its camera control method
CN106657782B (en) Picture processing method and terminal
CN106547439B (en) Method and device for processing message
CN105554386A (en) Mobile terminal and camera shooting control method thereof
CN105430258B (en) A kind of method and apparatus of self-timer group photo
CN106911881B (en) Dynamic photo shooting device and method based on double cameras and terminal
CN105827866A (en) Mobile terminal and control method
CN106851113A (en) A kind of photographic method and mobile terminal based on dual camera
CN104917965A (en) Shooting method and device
CN105100619A (en) Apparatus and method for adjusting shooting parameters
CN106454074A (en) Mobile terminal and shooting processing method
CN106373110A (en) Method and device for image fusion
CN106993134B (en) Image generation device and method and terminal
CN107018326B (en) Shooting method and device
CN106331482A (en) Photo processing device and method
CN106851114B (en) Photo display device, photo generation device, photo display method, photo generation method and terminal
CN105338244B (en) A kind of information processing method and mobile terminal
CN105262953B (en) A kind of mobile terminal and its method of control shooting
CN105744508B (en) Game data backup method and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170222