CN107613550B - Unlocking control method and related product
- Publication number: CN107613550B (application number CN201710891744.0A)
- Authority: CN (China)
- Prior art keywords: preset, face, face recognition, template, matching
- Prior art date
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The embodiment of the invention discloses an unlocking control method and a related product. The method comprises the following steps: acquiring environmental parameters; detecting whether the environmental parameters meet preset conditions; when the environmental parameters do not meet the preset conditions, not starting a face recognition mode; and when the environmental parameters meet the preset conditions, starting the face recognition mode. With the embodiment of the invention, the face recognition function can be used intelligently and the power consumption of the mobile terminal can be reduced.
Description
Technical Field
The invention relates to the technical field of mobile terminals, in particular to an unlocking control method and a related product.
Background
With the widespread application of mobile terminals (mobile phones, tablet computers, etc.), the applications and functions that mobile terminals can support keep increasing, and mobile terminals are developing towards diversification and individuation, becoming indispensable electronic products in users' lives.
At present, face recognition is increasingly favored by mobile terminal manufacturers. In the face recognition process, a camera is started to capture a picture and perform face recognition, and the hardware and graphics computation involved are often very power-consuming. Therefore, how to use the face recognition function intelligently and reduce the power consumption of the mobile terminal is a problem that needs to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides an unlocking control method and a related product, which can intelligently use a face recognition function and reduce the power consumption of a mobile terminal.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including: an Application Processor (AP), and an environment sensor and a face recognition device connected to the AP, wherein,
the environment sensor is used for acquiring environment parameters;
the AP is used for detecting whether the environmental parameters meet preset conditions or not; when the environmental parameters do not meet the preset conditions, controlling the face recognition device not to start a face recognition mode; and controlling the face recognition device to start the face recognition mode when the environmental parameters meet the preset conditions.
In a second aspect, an embodiment of the present invention provides an unlocking control method, which is applied to a mobile terminal including an application processor AP, and an environment sensor and a face recognition device connected to the AP, where the method includes:
the environment sensor acquires an environment parameter;
the AP detects whether the environmental parameters meet preset conditions or not; when the environmental parameters do not meet the preset conditions, controlling the face recognition device not to start a face recognition mode; and controlling the face recognition device to start the face recognition mode when the environmental parameters meet the preset conditions.
In a third aspect, an embodiment of the present invention provides an unlocking control method, including:
acquiring an environmental parameter;
detecting whether the environmental parameters meet preset conditions or not;
when the environmental parameters do not meet the preset conditions, a face recognition mode is not started;
and when the environmental parameters meet the preset conditions, starting the face recognition mode.
In a fourth aspect, an embodiment of the present invention provides an unlocking control apparatus, including:
a first acquisition unit for acquiring an environmental parameter;
the detection unit is used for detecting whether the environmental parameters meet preset conditions or not;
the execution unit is used for not starting a face recognition mode when the environmental parameters do not meet the preset conditions; and when the environmental parameters meet the preset conditions, starting the face recognition mode.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the third aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium for storing a computer program, where the computer program causes a computer to execute some or all of the steps described in the third aspect of the embodiments of the present invention.
In a seventh aspect, embodiments of the present invention provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the third aspect of embodiments of the present invention. The computer program product may be a software installation package.
The embodiment of the invention has the following beneficial effects:
it can be seen that the unlocking control method and the related product described in the embodiments of the present invention can obtain the environmental parameters, detect whether the environmental parameters satisfy the preset conditions, refrain from starting the face recognition mode when the preset conditions are not satisfied, and start the face recognition mode when they are satisfied. In this way, whether it is appropriate to start the face recognition mode is determined through the environmental parameters, and the face recognition mode is not started in some unsuitable scenes, so that the face recognition function is used intelligently and the power consumption of the mobile terminal is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1A is a schematic diagram of an architecture of an exemplary mobile terminal according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1C is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 1D is a schematic flowchart of an unlocking control method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another unlocking control method disclosed in the embodiment of the present invention;
fig. 3 is another schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4A is a schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 4B is another schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 4C is another schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 4D is another schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 4E is another schematic structural diagram of an unlocking control device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another mobile terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiments of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as a mobile terminal.
The following describes embodiments of the present invention in detail. In the example mobile terminal 1000 shown in fig. 1A, the face recognition device of the mobile terminal 1000 may be a camera 21, where the camera 21 may be a single camera or dual cameras; the dual cameras may be one visible-light camera and one infrared camera, or two visible-light cameras. The camera 21 may be a front camera or a rear camera.
Referring to fig. 1B, fig. 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an AP110, a face recognition device 120 and an environment sensor 130, where the AP110 is connected to the face recognition device 120 and the environment sensor 130 through a bus 150. Further, referring to fig. 1C, fig. 1C is a modified structure of the mobile terminal 100 depicted in fig. 1B; compared with fig. 1B, fig. 1C further includes a vibration sensor 140, a motion sensor 160 and a distance sensor 170, which are all connected to the AP110 through the bus 150. The environment sensor 130 may be used to detect an environmental parameter, and may be at least one of the following: a breath detection sensor, an ambient light sensor, an electromagnetic detection sensor, an ambient color temperature detection sensor, a positioning sensor, a temperature sensor, a humidity sensor, etc. The environmental parameter may be at least one of the following: breathing parameters, ambient brightness, ambient color temperature, ambient magnetic field interference coefficient, weather conditions, number of ambient light sources, geographic location, etc. The breathing parameters may be at least one of the following: number of breaths, breathing rate, breathing sounds, breathing curve, etc.
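Purely as an illustration (not part of the patent disclosure), the following Python sketch models how the components of fig. 1C might be composed around the AP; all class and attribute names are hypothetical stand-ins:

```python
from dataclasses import dataclass, field
from typing import Optional

class EnvironmentSensor:
    """Stands in for the environment sensor 130 (e.g. ambient light / electromagnetic sensor)."""
    def read(self) -> dict:
        # In a real terminal these values would come from hardware drivers.
        return {"ambient_brightness": 180.0, "magnetic_interference": 12.0}

class FaceRecognitionDevice:
    """Stands in for the face recognition device 120 (e.g. the front camera)."""
    def __init__(self) -> None:
        self.recognition_mode_on = False

@dataclass
class MobileTerminal:
    """AP110 plus the peripherals it reaches over bus 150 (fig. 1C)."""
    env_sensor: EnvironmentSensor = field(default_factory=EnvironmentSensor)
    face_device: FaceRecognitionDevice = field(default_factory=FaceRecognitionDevice)
    vibration_sensor: Optional[object] = None   # vibration sensor 140
    motion_sensor: Optional[object] = None      # motion sensor 160
    distance_sensor: Optional[object] = None    # distance sensor 170
```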
The mobile terminal described based on fig. 1A-1C can be used to implement the following functions:
the environment sensor 130 is used for acquiring environment parameters;
the AP110 is configured to detect whether the environmental parameter meets a preset condition; when the environmental parameter does not meet the preset condition, controlling the face recognition device 120 not to start a face recognition mode; and controlling the face recognition device 120 to start the face recognition mode when the environmental parameter meets the preset condition.
In one possible example, the vibration sensor 140 is configured to acquire a vibration frequency of the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP110 is further specifically configured to:
when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode;
and when the vibration frequency is greater than or equal to the preset vibration frequency, executing the step of not starting the face recognition mode.
In one possible example, the distance sensor 170 is configured to determine a distance between a human face and the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP110 is further specifically configured to:
and when the distance is smaller than a preset distance threshold value, executing the step of not starting the face recognition mode.
And when the distance is greater than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
In one possible example, the AP110 is further specifically configured to:
acquiring the current electric quantity of the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP110 is further specifically configured to:
when the current electric quantity is larger than or equal to a preset electric quantity threshold value, executing the step of starting the face recognition mode;
and when the current electric quantity is smaller than the preset electric quantity threshold value, executing the step of not starting the face recognition mode.
In one possible example, after the face recognition mode is started, the AP110 is further specifically configured to:
acquiring shooting parameters corresponding to the environment parameters;
the face recognition device 120 is configured to perform shooting according to the shooting parameters to obtain a face image;
the AP110 is further specifically configured to:
matching the face image with a preset face template; and executing unlocking operation when the face image is successfully matched with the preset face template.
The mobile terminal described in fig. 1A-1C may be used to implement an unlocking control method, where the method includes:
the environmental sensor 130 acquires environmental parameters;
the AP110 detects whether the environmental parameter satisfies a preset condition; when the environmental parameter does not meet the preset condition, controlling the face recognition device 120 not to start a face recognition mode; and controlling the face recognition device 120 to start the face recognition mode when the environmental parameter meets the preset condition.
It can be seen that the unlocking control method described in the embodiment of the present invention may obtain the environmental parameters, detect whether the environmental parameters satisfy the preset conditions, refrain from starting the face recognition mode when the preset conditions are not satisfied, and start the face recognition mode when they are satisfied. In this way, whether it is appropriate to start the face recognition mode is determined through the environmental parameters, and the face recognition mode is not started in some unsuitable scenes, so that the face recognition function is used intelligently and the power consumption of the mobile terminal is reduced.
Referring to fig. 1D, a schematic flowchart of an embodiment of an unlocking control method according to an embodiment of the present invention is shown based on the mobile terminal described in fig. 1A to fig. 1C. The unlocking control method described in this embodiment may include the following steps:
101. Acquire the environmental parameters.
The environmental parameter may be at least one of the following: ambient brightness, ambient color temperature, ambient magnetic field interference coefficient, breathing parameters, weather conditions, number of ambient light sources, geographical location, etc.; the breathing parameters may be at least one of the following: number of breaths, breathing rate, breathing sounds, breathing curve, etc. The environmental parameter may be collected by an environmental sensor, which may be at least one of the following: an ambient light sensor, a breath detection sensor, an electromagnetic detection sensor, an ambient color temperature detection sensor, a positioning sensor, a temperature sensor, a humidity sensor, etc. For example, the ambient light sensor can be used to collect the ambient brightness, the positioning sensor can acquire the current position, and the electromagnetic detection sensor can be used to detect the magnetic field interference intensity; if the interference is too strong, image acquisition may fail.
Alternatively, the step 101 may be implemented as follows:
and acquiring the environmental parameters in the screen-off state or acquiring the environmental parameters in the screen-on state.
102. Detect whether the environmental parameters meet preset conditions.
The preset condition may be set by the user, or may be set by default by the system. The preset condition may be defined for at least one of the following environmental factors: ambient brightness, magnetic field strength, geographical location, breathing phenomena (which on the one hand allow living objects to be detected and on the other hand indicate whether the user is beside the mobile terminal), temperature, humidity, ambient color temperature, etc. For example, the preset condition may be that the ambient brightness is in a first preset range, where the first preset range may be set by the user or defaulted by the system; for another example, the preset condition may be that the magnetic field strength is in a second preset range, where the second preset range may be set by the user or defaulted by the system; for yet another example, the preset condition may be that the ambient brightness is in the first preset range and the magnetic field strength is in the second preset range. The specific preset condition is determined according to the actual situation.
Optionally, the preset condition may correspond to a season or to the habits of the user. For example, within a year, the preset condition in spring may differ from the preset condition in summer; likewise, the user's living habits may differ at different stages, and the preset condition may differ accordingly and may be refined by artificial intelligence to better suit the user's living habits.
For example, when the ambient brightness is low, face recognition may not be started: in a dark environment, the sharpness of the acquired face image is low, so the face recognition mode is not started.
For another example, when the number of ambient light sources is large, the lighting may be disturbed and the acquired image may not be clear, so the face recognition mode is not started.
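For illustration only, a minimal Python sketch of such a compound preset condition follows; the range values are invented for the example, whereas in practice they would be user-set or system defaults:

```python
# Hypothetical preset ranges (the "first" and "second" preset ranges in the text).
BRIGHTNESS_RANGE_LUX = (50.0, 10000.0)   # ambient brightness must fall in this range
MAGNETIC_RANGE_UT = (0.0, 60.0)          # magnetic field interference must fall in this range

def meets_preset_condition(env: dict) -> bool:
    """Return True when every monitored environmental parameter is inside its preset range."""
    brightness_ok = BRIGHTNESS_RANGE_LUX[0] <= env["ambient_brightness"] <= BRIGHTNESS_RANGE_LUX[1]
    magnetic_ok = MAGNETIC_RANGE_UT[0] <= env["magnetic_interference"] <= MAGNETIC_RANGE_UT[1]
    return brightness_ok and magnetic_ok
```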
103. When the environmental parameters do not meet the preset conditions, do not start a face recognition mode.
When the environmental parameters do not meet the preset conditions, the face recognition mode is not started, so that the power consumption of the mobile terminal can be reduced.
104. When the environmental parameters meet the preset conditions, start the face recognition mode.
When the environmental parameters meet the preset conditions, the face recognition mode can be started; at this point, the face recognition accuracy can be guaranteed to a certain extent, so the face recognition efficiency is improved.
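Steps 101 to 104 can be summarised by the following sketch, which reuses the hypothetical meets_preset_condition helper and the sensor/device objects from the earlier sketches (again, only an illustration, not the patented implementation):

```python
def unlock_control(env_sensor, face_device) -> bool:
    """Sketch of steps 101-104; returns whether the face recognition mode was started."""
    env = env_sensor.read()                      # 101: acquire the environmental parameters
    if not meets_preset_condition(env):          # 102: check the preset condition
        face_device.recognition_mode_on = False  # 103: do not start face recognition (saves power)
        return False
    face_device.recognition_mode_on = True       # 104: start the face recognition mode
    return True
```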
Optionally, before step 101, or between step 101 and step 102, or when the environmental parameter satisfies the preset condition, the following steps may be included:
acquiring the vibration frequency of the mobile terminal;
then, after the environmental parameter meets the preset condition, the following steps may be further included:
when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode; and when the vibration frequency is greater than or equal to a preset vibration frequency, executing the step of not starting the face recognition mode.
The vibration frequency of the mobile terminal can be detected through the vibration sensor; vibration reflects, to a certain extent, whether the user is in a motion state. If the user is in a motion state, the mobile terminal shakes and the captured image is blurred, which causes face recognition to fail. The preset vibration frequency can be set by the user or defaulted by the system. Therefore, the step of starting the face recognition mode is performed when the vibration frequency is less than the preset vibration frequency, and the step of not starting the face recognition mode is performed when the vibration frequency is greater than or equal to the preset vibration frequency.
Optionally, before step 101, or between step 101 and step 102, or when the environmental parameter satisfies the preset condition, the following steps may be included:
a1, determining the distance between the human face and the mobile terminal;
then, after the environmental parameter meets the preset condition, the following steps may be further included:
a2, when the distance is smaller than a preset distance threshold, executing the step of not starting the face recognition mode.
A3, when the distance is larger than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
The distance between the face and the mobile terminal can be detected through the distance sensor. Under normal conditions, if the face is too close to the camera, the captured face is incomplete, which to a great extent leads to face recognition failure and reduces face recognition efficiency. The preset distance threshold may be set by the user or defaulted by the system. When the distance is smaller than the preset distance threshold, the step of not starting the face recognition mode is performed; when the distance is greater than or equal to the preset distance threshold, the step of starting the face recognition mode is performed.
Optionally, before step 101, or between step 101 and step 102, or when the environmental parameter satisfies the preset condition, the following steps may be included:
acquiring the current electric quantity of the mobile terminal;
then, after the environmental parameter meets the preset condition, the following steps may be further included:
when the current electric quantity is greater than or equal to a preset electric quantity threshold value, executing the step of starting the face recognition mode; and when the current electric quantity is smaller than the preset electric quantity threshold value, executing the step of not starting the face recognition mode.
The current electric quantity of the mobile terminal can be detected, and the preset electric quantity threshold value can be set by the user or defaulted by the system. When the electric quantity is low, starting the camera for face recognition may cause the terminal to shut down; moreover, the user may need to reserve the remaining electric quantity for other matters. Therefore, in this case, the face recognition mode is not started when the electric quantity falls below a certain level.
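The three optional gates described above (vibration frequency, face-to-terminal distance and remaining electric quantity) all reduce to simple threshold comparisons once the environmental parameters satisfy the preset condition. A combined sketch with hypothetical threshold values, for illustration only:

```python
PRESET_VIBRATION_FREQ_HZ = 5.0   # at or above this the terminal is assumed to be shaking
PRESET_DISTANCE_CM = 20.0        # below this the captured face is likely to be incomplete
PRESET_BATTERY_PCT = 10.0        # below this, recognition is skipped to preserve power

def optional_gates_allow_recognition(vibration_freq_hz: float,
                                     face_distance_cm: float,
                                     battery_pct: float) -> bool:
    """Return True only if every optional check allows starting face recognition."""
    if vibration_freq_hz >= PRESET_VIBRATION_FREQ_HZ:
        return False   # terminal is shaking: the shot image would be blurred
    if face_distance_cm < PRESET_DISTANCE_CM:
        return False   # face too close to the camera: capture would be incomplete
    if battery_pct < PRESET_BATTERY_PCT:
        return False   # battery too low: avoid the power cost of recognition
    return True
```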
It can be seen that the unlocking control method described in the embodiment of the present invention may obtain the environmental parameters, detect whether the environmental parameters satisfy the preset conditions, refrain from starting the face recognition mode when the preset conditions are not satisfied, and start the face recognition mode when they are satisfied. In this way, whether it is appropriate to start the face recognition mode is determined through the environmental parameters, and the face recognition mode is not started in some unsuitable scenes, so that the face recognition function is used intelligently and the power consumption of the mobile terminal is reduced.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of an unlocking control method according to an embodiment of the present invention. The unlocking control method described in this embodiment may include the following steps:
201. Acquire the environmental parameters.
202. Detect whether the environmental parameters meet preset conditions.
203. When the environmental parameters meet the preset conditions, start the face recognition mode.
The specific description of the steps 201 to 203 may refer to the corresponding steps of the unlocking control method described in fig. 1D, and will not be described herein again.
204. Acquire shooting parameters corresponding to the environmental parameters, and perform shooting according to the shooting parameters to obtain a face image.
The mobile terminal may pre-store a corresponding relationship between the shooting parameters and the environment parameters, and then determine the shooting parameters corresponding to the environment parameters according to the corresponding relationship, where the shooting parameters may include but are not limited to: exposure time, aperture size, photographing mode, sensitivity ISO, white balance parameters, and the like.
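A sketch of such a pre-stored correspondence, reduced for illustration to a brightness-keyed table; the parameter values are invented, since the patent does not specify any concrete values:

```python
# Hypothetical table: upper ambient-brightness bound (lux) -> shooting parameters.
SHOOTING_PARAM_TABLE = [
    (100.0,        {"exposure_time_s": 1 / 30,  "iso": 800, "white_balance": "incandescent"}),
    (1000.0,       {"exposure_time_s": 1 / 60,  "iso": 400, "white_balance": "auto"}),
    (float("inf"), {"exposure_time_s": 1 / 250, "iso": 100, "white_balance": "daylight"}),
]

def shooting_params_for(ambient_brightness: float) -> dict:
    """Look up the shooting parameters stored for the current brightness bucket."""
    for upper_bound, params in SHOOTING_PARAM_TABLE:
        if ambient_brightness <= upper_bound:
            return params
    raise ValueError("table covers all brightness values")  # unreachable
```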
205. Match the face image with a preset face template.
The preset face template may be pre-stored in a memory and may be entered before step 201 is executed. The face image can then be matched with the preset face template: when the matching value between the face image and the preset face template is greater than a preset recognition threshold, the unlocking operation is executed; when the matching value is less than or equal to the preset recognition threshold, the user is prompted that face recognition has failed.
Optionally, in the step 205, matching the face image with a preset face template may include the following steps:
51. determining a face angle of the face image;
52. determining a preset face template corresponding to the face angle;
53. taking a region with definition meeting a preset requirement in the face image as a clearest region, and extracting feature points of the clearest region to obtain a first feature point set;
54. extracting the peripheral outline of the face image to obtain a first outline;
55. matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
56. when the first contour is successfully matched with the second contour of the preset face template and the first feature point set is successfully matched with the preset face template, the matching is confirmed to be successful; and confirming that the matching fails when the first contour fails to be matched with the second contour of the preset face template, or when the first feature point set fails to be matched with the preset face template.
In the embodiment of the invention, a plurality of face templates can be stored in the mobile terminal in advance, and each face template corresponds to an angle or an angle range; that is, different face templates are adopted for different angles, since even for the same person the front face and the side face yield different feature points and other features. Therefore, in the present application, the face angle of the face image is determined first, and the preset face template corresponding to that face angle is selected. The clearest region is then selected from the face image; in the clearest region the collected features are the most complete, which is favorable for improving face recognition efficiency. On the other hand, because the clearest region is only a partial region, accidental matching may occur or the recognizable area may be small, so contour extraction is also performed on the face image to obtain the first contour. In the matching process, the feature points of the clearest region are matched with the preset face template, and at the same time the first contour is matched with the preset face template; the matching is confirmed to be successful only when both succeed, and if either one fails, the matching fails. In this way, the matching speed and the matching security are guaranteed while the success rate is guaranteed.
Optionally, the definition may also be measured by the number of feature points; after all, the clearer the image, the more feature points it contains. In this case the preset requirement is that the number of feature points is greater than a preset number threshold, which may be set by the user or defaulted by the system, and step 53 may be implemented as follows: determining the region of the face image whose number of feature points is greater than the preset number threshold as the clearest region.
Optionally, the definition may be calculated by a specific formula, which is available in the related art and is not described herein. In this case the preset requirement is that the definition value is greater than a preset definition threshold, which may be set by the user or defaulted by the system, and step 53 may be implemented as follows: determining the region of the face image whose definition value is greater than the preset definition threshold as the clearest region.
In addition, the above feature extraction can be implemented by the following algorithms: the Harris corner detection algorithm, Scale-Invariant Feature Transform (SIFT), the SUSAN corner detection algorithm, etc., which are not described herein again. The contour extraction in step 54 may use the following algorithms: the Hough transform, Haar, Canny, etc.
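A structural sketch of steps 51-56 follows. The detection primitives are left as hypothetical callables (in practice Harris/SIFT-style feature extraction and Hough/Canny-style contour extraction, as noted above); only the control flow of the dual match is shown, and it is not presented as the patented implementation:

```python
def match_face(face_image,
               templates_by_angle,   # angle bucket -> preset face template (with .contour / .features)
               estimate_angle,       # hypothetical: returns the face angle bucket of an image
               clearest_region_of,   # hypothetical: region whose definition meets the preset requirement
               extract_features,     # hypothetical: e.g. Harris- or SIFT-based keypoint extraction
               extract_outline,      # hypothetical: e.g. Hough/Canny-based peripheral contour
               features_match,       # hypothetical comparison of two feature point sets
               contours_match):      # hypothetical comparison of two contours
    """Sketch of steps 51-56: both the contour and the feature set must match."""
    angle = estimate_angle(face_image)               # 51: face angle of the image
    template = templates_by_angle[angle]             # 52: template stored for that angle

    region = clearest_region_of(face_image)          # 53: clearest region of the face image
    first_feature_set = extract_features(region)     #     -> first feature point set

    first_contour = extract_outline(face_image)      # 54: peripheral outline -> first contour

    # 55-56: matching succeeds only if both comparisons succeed.
    return (contours_match(first_contour, template.contour)
            and features_match(first_feature_set, template.features))
```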
Optionally, between the step 204 and the step 205, the following steps may be further included:
carrying out image enhancement processing on the face image, and matching the face image after the image enhancement processing with the preset face template.
The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform-based denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, gray-scale stretching, etc.). After image enhancement processing is performed on the face image, the quality of the face image can be improved to some extent.
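A minimal sketch of such an enhancement step using common OpenCV routines (this assumes the opencv-python package; non-local-means denoising and histogram equalization are used here only as readily available stand-ins for the techniques listed above):

```python
import cv2  # assumes the opencv-python package is installed

def enhance_face_image(bgr_image):
    """Denoise the image, then equalize the luminance histogram for dark scenes."""
    # Denoising stand-in (the description mentions wavelet-based denoising).
    denoised = cv2.fastNlMeansDenoisingColored(bgr_image, None, 10, 10, 7, 21)

    # Dark-vision enhancement: equalize only the luminance channel.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```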
Optionally, between the step 204 and the step 205, the following steps may be further included:
b1, carrying out image quality evaluation on the face image to obtain an image quality evaluation value;
b2, when the image quality evaluation value is lower than a preset quality threshold, performing image enhancement processing on the face image.
The preset quality threshold value can be set by the user or defaulted by the system. Image quality evaluation can be carried out on the face image to obtain an image quality evaluation value, and whether the quality of the face image is good or bad is judged through this value. When the image quality evaluation value is greater than or equal to the preset quality threshold value, the face image quality can be considered good; when the image quality evaluation value is smaller than the preset quality threshold value, the face image quality can be considered poor, and image enhancement processing can then be carried out on the face image.
In step B1, the image quality of the face image may be evaluated by using at least one image quality evaluation index, so as to obtain an image quality evaluation value.
In specific implementation, when the face image is evaluated, a plurality of image quality evaluation indexes may be used, and each image quality evaluation index corresponds to a weight. Each image quality evaluation index yields an evaluation result, and finally a weighting operation is performed to obtain the final image quality evaluation value. The image quality evaluation indexes may include, but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, etc.
It should be noted that evaluating image quality with a single evaluation index has certain limitations, so a plurality of image quality evaluation indexes may be used. However, using more indexes is not always better: the more indexes, the higher the computational complexity of the evaluation, without necessarily improving the result. Therefore, when higher evaluation accuracy is required, 2 to 10 image quality evaluation indexes may be used; the number of indexes and which indexes are selected are determined according to the specific implementation situation. Of course, the indexes should be selected in combination with the specific scene; the indexes used for evaluation in a dark environment may differ from those used in a bright environment.
Alternatively, when the requirement on image quality evaluation accuracy is not high, a single image quality evaluation index may be used. For example, the image to be processed may be evaluated by entropy alone: the larger the entropy, the better the image quality; conversely, the smaller the entropy, the worse the image quality.
Alternatively, when the requirement on image quality evaluation accuracy is high, a plurality of image quality evaluation indexes may be used. In this case a weight may be set for each index, a plurality of evaluation values may be obtained, and the final image quality evaluation value may be obtained from these evaluation values and their corresponding weights. For example, suppose three image quality evaluation indexes A, B and C with weights a1, a2 and a3 are used to evaluate an image, and the evaluation values corresponding to A, B and C are b1, b2 and b3 respectively; then the final image quality evaluation value is a1*b1 + a2*b2 + a3*b3. In general, the larger the image quality evaluation value, the better the image quality.
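A sketch of such a weighted evaluation with three simple indexes (mean, standard deviation and entropy); the weights and normalisations are invented for the example and are not taken from the patent:

```python
import numpy as np

WEIGHTS = {"mean": 0.2, "std": 0.3, "entropy": 0.5}   # hypothetical a1, a2, a3

def image_quality_score(gray: np.ndarray) -> float:
    """Weighted combination of per-index scores, each roughly normalised to [0, 1]."""
    mean_score = float(gray.mean()) / 255.0
    std_score = min(float(gray.std()) / 128.0, 1.0)

    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy_score = float(-(p * np.log2(p)).sum()) / 8.0   # 8 bits is the maximum entropy

    return (WEIGHTS["mean"] * mean_score
            + WEIGHTS["std"] * std_score
            + WEIGHTS["entropy"] * entropy_score)
```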
206. When the face image is successfully matched with the preset face template, execute the unlocking operation.
When the face image is successfully matched with the preset face template, the unlocking operation is executed. When the matching of the face image with the preset face template fails, the user can be prompted that face recognition has failed.
Optionally, the unlocking operation may be at least one of the following cases: when the mobile terminal is in a screen-off state, the unlocking operation may be to light up the screen and enter the main page of the mobile terminal or a designated page; when the mobile terminal is in a bright-screen state, the unlocking operation may be to enter the main page of the mobile terminal or a designated page; for example, when the mobile terminal is on a payment page, the unlocking operation may be to perform the payment. The designated page may be at least one of the following: a page of an application, or a page specified by the user.
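The dispatch of the unlocking operation on a successful match can be sketched as follows (screen states and page names are invented labels, used only to illustrate the cases above):

```python
def perform_unlock(screen_state: str, current_page: str) -> str:
    """Pick the unlocking action for the current terminal state (hypothetical labels)."""
    if current_page == "payment":
        return "confirm_payment"              # on a payment page: complete the payment
    if screen_state == "off":
        return "light_screen_and_open_page"   # screen off: light it up, enter the home/designated page
    return "open_home_or_designated_page"     # screen already lit: just enter the page

# Example: a successful match while the screen is off wakes the screen and unlocks.
assert perform_unlock("off", "lock_screen") == "light_screen_and_open_page"
```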
It can be seen that the unlocking control method described in the embodiments of the present invention can obtain the environmental parameters, detect whether the environmental parameters satisfy the preset conditions, refrain from starting the face recognition mode when they are not satisfied, and start the face recognition mode when they are satisfied; it then obtains the shooting parameters corresponding to the environmental parameters, performs shooting according to the shooting parameters to obtain a face image, matches the face image with the preset face template, and executes the unlocking operation when the matching succeeds. In this way, whether it is appropriate to start the face recognition mode is determined through the environmental parameters, the face recognition mode is not started in some unsuitable scenes, and, after the face recognition mode is started, shooting parameters suitable for the environment can be obtained for shooting, which improves the image quality and the success rate of face recognition. The face recognition function is thus used intelligently, reducing the power consumption of the mobile terminal.
Referring to fig. 3, fig. 3 is a mobile terminal according to an embodiment of the present invention, including: an application processor AP and a memory; and one or more programs stored in the memory and configured for execution by the AP, the programs including instructions for performing the steps of:
acquiring an environmental parameter;
detecting whether the environmental parameters meet preset conditions or not;
when the environmental parameters do not meet the preset conditions, a face recognition mode is not started;
and when the environmental parameters meet the preset conditions, starting the face recognition mode.
In one possible example, the program further comprises instructions for performing the steps of:
acquiring the vibration frequency of the mobile terminal;
after the environmental parameter satisfies the preset condition, the program further includes instructions for:
when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode;
when the vibration frequency is greater than or equal to a preset vibration frequency, executing the step of not starting the face recognition mode.
in one possible example, the program further comprises instructions for performing the steps of:
determining the distance between the face and the mobile terminal;
after the environmental parameter satisfies the preset condition, the program further includes instructions for:
and when the distance is smaller than a preset distance threshold value, executing the step of not starting the face recognition mode.
And when the distance is greater than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
In one possible example, the program further includes instructions for performing the steps of:
acquiring the current electric quantity of the mobile terminal;
after the environmental parameter satisfies the preset condition, the program further includes instructions for:
when the current electric quantity is greater than or equal to a preset electric quantity threshold value, executing the step of starting the face recognition mode;
and when the current electric quantity is smaller than the preset electric quantity threshold value, executing the step of not starting the face recognition mode.
in one possible example, after the initiating the face recognition mode, the program further includes instructions for performing the steps of:
acquiring shooting parameters corresponding to the environment parameters, and shooting according to the shooting parameters to obtain a face image;
matching the face image with a preset face template;
and when the face image is successfully matched with the preset face template, executing unlocking operation.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of an unlocking control device according to the present embodiment. The unlocking control device is applied to a mobile terminal and comprises a first obtaining unit 401, a detection unit 402 and an execution unit 403, wherein,
a first obtaining unit 401, configured to obtain an environmental parameter;
a detecting unit 402, configured to detect whether the environmental parameter meets a preset condition;
an execution unit 403, configured to not start a face recognition mode when the environmental parameter does not satisfy the preset condition; and when the environmental parameters meet the preset conditions, starting the face recognition mode.
Alternatively, as shown in fig. 4B, fig. 4B is a modified structure of the unlocking control device depicted in fig. 4A, which may further include, compared with fig. 4A: the second obtaining unit 404 is specifically as follows:
a second obtaining unit 404, configured to obtain the vibration frequency of the mobile terminal;
after the environmental parameter meets the preset condition, the execution unit 403 is specifically configured to: when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode; and when the vibration frequency is greater than or equal to the preset vibration frequency, executing the step of not starting the face recognition mode.
Optionally, after the environmental parameter satisfies the preset condition, as shown in fig. 4C, fig. 4C is a modified structure of the unlocking control device depicted in fig. 4A, which may further include, compared with fig. 4A: the determining unit 405 specifically includes:
the determining unit 405 is configured to determine a distance between a human face and the mobile terminal;
the execution unit 403 is specifically configured to:
and when the distance is smaller than a preset distance threshold value, executing the step of not starting the face recognition mode.
And when the distance is greater than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
Alternatively, as shown in fig. 4D, fig. 4D is a modified structure of the unlocking control device depicted in fig. 4A, which may further include, compared with fig. 4A: the third obtaining unit 406 specifically includes:
the third obtaining unit 406 is configured to obtain a current electric quantity of the mobile terminal;
after the environmental parameter meets the preset condition, the execution unit 403 is specifically configured to:
when the current electric quantity is larger than or equal to a preset electric quantity threshold value, executing the step of starting the face recognition mode; and when the current electric quantity is smaller than the preset electric quantity threshold value, executing the step of not starting the face recognition mode.
Alternatively, as shown in fig. 4E, fig. 4E is a modified structure of the unlocking control device depicted in fig. 4A, which may further include, compared with fig. 4A: the fourth obtaining unit 407, the matching unit 408 and the unlocking unit 409 are specifically as follows:
a fourth obtaining unit 407, configured to obtain a shooting parameter corresponding to the environment parameter, and perform shooting according to the shooting parameter to obtain a face image;
a matching unit 408, configured to match the face image with a preset face template;
and the unlocking unit 409 is used for executing unlocking operation when the face image is successfully matched with the preset face template.
It can be seen that the unlocking control device described in the embodiment of the present invention may acquire the environmental parameters, detect whether the environmental parameters satisfy the preset conditions, refrain from starting the face recognition mode when the preset conditions are not satisfied, and start the face recognition mode when they are satisfied. In this way, whether it is appropriate to start the face recognition mode is determined through the environmental parameters, and the face recognition mode is not started in some unsuitable scenes, so that the face recognition function is used intelligently and the power consumption of the mobile terminal is reduced.
It can be understood that the functions of each program module of the unlocking control device in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown, and details of the specific technology are not disclosed, please refer to the method part in the embodiment of the present invention. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, and the like, taking the mobile terminal as the mobile phone as an example:
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention. Referring to fig. 5, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, sensor 950, audio circuit 960, Wireless Fidelity (WiFi) module 970, application processor AP980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch display 933, a face recognition device 931, and other input devices 932. The specific structure and composition of the face recognition device 931 can refer to the above description, and will not be described in detail herein. The input unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Wherein, the AP980 is configured to perform the following steps:
acquiring an environmental parameter;
detecting whether the environmental parameters meet preset conditions or not;
when the environmental parameters do not meet the preset conditions, a face recognition mode is not started;
and when the environmental parameters meet the preset conditions, starting the face recognition mode.
The AP980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions and processes of the mobile phone by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Optionally, the AP980 may include one or more processing units, which may be artificial intelligence chips, quantum chips; preferably, the AP980 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the AP 980.
Further, the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 5 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the AP980 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments shown in fig. 1D or fig. 2, the method flows of the steps may be implemented based on the structure of the mobile phone.
In the embodiments shown in fig. 3 and fig. 4A to fig. 4E, the functions of the units may be implemented based on the structure of the mobile phone.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the unlocking control methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product including a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the unlock control methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (12)
1. A mobile terminal, comprising: an application processor AP, and an environment sensor and a face recognition device connected to the AP, wherein,
the environment sensor is used for acquiring environment parameters;
the AP is used for detecting whether the environmental parameters meet preset conditions; when the environmental parameters do not meet the preset conditions, controlling the face recognition device not to start a face recognition mode; when the environmental parameters meet the preset conditions, controlling the face recognition device to start the face recognition mode, acquiring shooting parameters corresponding to the environmental parameters, shooting according to the shooting parameters to obtain a face image, matching the face image with a preset face template, and executing an unlocking operation when the face image is successfully matched with the preset face template;
in the aspect of matching the face image with a preset face template, the AP is specifically configured to:
determining a face angle of the face image;
determining a preset face template corresponding to the face angle;
taking a region of the face image whose sharpness meets a preset requirement as the clearest region, and extracting feature points of the clearest region to obtain a first feature point set;
extracting the peripheral outline of the face image to obtain a first outline;
matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
confirming that the matching is successful when the first contour is successfully matched with the second contour of the preset face template and the first feature point set is successfully matched with the preset face template; and confirming that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
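The matching logic recited in claim 1 can be pictured concretely with a short sketch. The following Python snippet, using OpenCV, is a minimal illustration only: the use of Laplacian variance as the sharpness measure, ORB descriptors for the feature point set, cv2.matchShapes for the contour comparison, and every threshold value are assumptions introduced here, not requirements of the claim.

```python
# Illustrative sketch only: the algorithms (Laplacian sharpness, ORB,
# matchShapes) and all thresholds are assumptions, not claim requirements.
import cv2

def sharpest_region(gray, grid=4):
    """Return the grid tile with the highest Laplacian variance, standing
    in for the 'clearest region' whose sharpness meets a preset requirement."""
    h, w = gray.shape
    best, best_score = None, -1.0
    for i in range(grid):
        for j in range(grid):
            tile = gray[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            score = cv2.Laplacian(tile, cv2.CV_64F).var()
            if score > best_score:
                best, best_score = tile, score
    return best

def outer_contour(gray):
    """Largest external contour, used as the peripheral outline."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def match_face(face_gray, template):
    """template: dict holding a precomputed 'contour' and ORB 'descriptors'
    of the preset face template selected for the detected face angle."""
    # Contour match: a lower matchShapes score means more similar outlines.
    contour_ok = cv2.matchShapes(outer_contour(face_gray), template["contour"],
                                 cv2.CONTOURS_MATCH_I1, 0.0) < 0.15
    # Feature-point match on the clearest region.
    orb = cv2.ORB_create()
    _, descriptors = orb.detectAndCompute(sharpest_region(face_gray), None)
    if descriptors is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(descriptors, template["descriptors"])
            if m.distance < 50]
    features_ok = len(good) >= 20
    # Success only when BOTH the contour and the feature set match.
    return contour_ok and features_ok
```

In such a prototype the template contour and descriptors would be enrolled in advance for each supported face angle, and the angle estimated from the captured image would select which template is compared.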
2. The mobile terminal of claim 1, wherein the mobile terminal further comprises:
the vibration sensor is used for acquiring the vibration frequency of the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP is further specifically configured to:
when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode;
and when the vibration frequency is greater than or equal to the preset vibration frequency, executing the step of not starting the face recognition mode.
3. The mobile terminal of claim 1, wherein the mobile terminal further comprises:
the distance sensor is used for determining the distance between the face and the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP is further specifically configured to:
when the distance is smaller than a preset distance threshold value, executing the step of not starting the face recognition mode;
and when the distance is greater than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
4. The mobile terminal of claim 1, wherein the AP is further specifically configured to:
acquiring the current battery level of the mobile terminal;
after the environmental parameter satisfies the preset condition, the AP is further specifically configured to:
when the current battery level is greater than or equal to a preset battery level threshold, executing the step of starting the face recognition mode;
and when the current battery level is less than the preset battery level threshold, executing the step of not starting the face recognition mode.
5. An unlocking control method, applied to a mobile terminal comprising an application processor AP, and an environment sensor and a face recognition device connected to the AP, wherein the method comprises the following steps:
the environment sensor acquires an environment parameter;
the AP detects whether the environmental parameters meet preset conditions; when the environmental parameters do not meet the preset conditions, controlling the face recognition device not to start a face recognition mode; when the environmental parameters meet the preset conditions, controlling the face recognition device to start the face recognition mode, acquiring shooting parameters corresponding to the environmental parameters, shooting according to the shooting parameters to obtain a face image, matching the face image with a preset face template, and executing an unlocking operation when the face image is successfully matched with the preset face template;
wherein, the matching of the face image and a preset face template comprises:
determining a face angle of the face image;
determining a preset face template corresponding to the face angle;
taking a region of the face image whose sharpness meets a preset requirement as the clearest region, and extracting feature points of the clearest region to obtain a first feature point set;
extracting the peripheral outline of the face image to obtain a first outline;
matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
confirming that the matching is successful when the first contour is successfully matched with the second contour of the preset face template and the first feature point set is successfully matched with the preset face template; and confirming that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
6. An unlocking control method, comprising:
acquiring an environmental parameter;
detecting whether the environmental parameters meet preset conditions;
when the environmental parameters do not meet the preset conditions, a face recognition mode is not started;
when the environmental parameters meet the preset conditions, starting the face recognition mode, acquiring shooting parameters corresponding to the environmental parameters, shooting according to the shooting parameters to obtain a face image, matching the face image with a preset face template, and executing an unlocking operation when the face image is successfully matched with the preset face template;
wherein, the matching of the face image and a preset face template comprises:
determining a face angle of the face image;
determining a preset face template corresponding to the face angle;
taking a region of the face image whose sharpness meets a preset requirement as the clearest region, and extracting feature points of the clearest region to obtain a first feature point set;
extracting the peripheral outline of the face image to obtain a first outline;
matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
confirming that the matching is successful when the first contour is successfully matched with the second contour of the preset face template and the first feature point set is successfully matched with the preset face template; and confirming that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
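As a rough orientation, the overall flow of claim 6 could be organized as in the sketch below. The light-sensor and camera interfaces, the lux thresholds, and the shooting-parameter table are hypothetical placeholders; the claim itself speaks only of an environmental parameter, a preset condition, and shooting parameters corresponding to the environmental parameter.

```python
# Hypothetical control flow; the sensor/camera objects, thresholds and
# the parameter table are placeholders invented for this sketch.
from dataclasses import dataclass

@dataclass
class ShotParams:
    iso: int
    exposure_ms: int

MIN_LUX, MAX_LUX = 5, 20_000             # preset condition: usable light range
PARAM_TABLE = [                           # shooting parameters per light band
    (50,      ShotParams(iso=800, exposure_ms=60)),   # dim
    (500,     ShotParams(iso=400, exposure_ms=30)),   # indoor
    (MAX_LUX, ShotParams(iso=100, exposure_ms=10)),   # bright
]

def try_face_unlock(light_sensor, camera, match_face, template, unlock):
    lux = light_sensor.read_lux()
    if not (MIN_LUX <= lux <= MAX_LUX):
        return False                      # condition not met: recognition stays off
    params = next(p for limit, p in PARAM_TABLE if lux <= limit)
    face_image = camera.capture(iso=params.iso, exposure_ms=params.exposure_ms)
    if match_face(face_image, template):  # e.g. the contour + feature check above
        unlock()                          # perform the unlocking operation
        return True
    return False
```

Keeping the ambient check ahead of any camera activity is what yields the power saving described in the specification: the camera and the matching computation run only when the environment makes a successful recognition plausible.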
7. The method of claim 6, further comprising:
acquiring the vibration frequency of the mobile terminal;
after the environmental parameter satisfies the preset condition, the method further comprises:
when the vibration frequency is less than a preset vibration frequency, executing the step of starting the face recognition mode;
and when the vibration frequency is greater than or equal to the preset vibration frequency, executing the step of not starting the face recognition mode.
8. The method of claim 6, further comprising:
determining the distance between the face and the mobile terminal;
after the environmental parameter satisfies the preset condition, the method further comprises:
when the distance is smaller than a preset distance threshold value, executing the step of not starting the face recognition mode;
and when the distance is greater than or equal to the preset distance threshold, executing the step of starting the face recognition mode.
9. The method of claim 6, further comprising:
acquiring the current battery level of the mobile terminal;
after the environmental parameter satisfies the preset condition, the method further comprises:
when the current battery level is greater than or equal to a preset battery level threshold, executing the step of starting the face recognition mode;
and when the current battery level is less than the preset battery level threshold, executing the step of not starting the face recognition mode.
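Claims 7 to 9 add three independent gates that must pass before the step of starting the face recognition mode is executed. A compact expression of these gates is sketched below; every threshold and the units of the sensor readings are assumptions for illustration, since the claims only require that preset values exist.

```python
# Thresholds and units are illustrative assumptions, not claim requirements.
PRESET_VIBRATION_HZ = 5.0   # at or above this, the device shakes too much
PRESET_DISTANCE_CM = 20.0   # faces closer than this are skipped
PRESET_BATTERY_PCT = 15     # below this, skip recognition to save power

def gates_allow_face_recognition(vibration_hz, distance_cm, battery_pct):
    if vibration_hz >= PRESET_VIBRATION_HZ:  # claim 7: too much shaking
        return False
    if distance_cm < PRESET_DISTANCE_CM:     # claim 8: face too close
        return False
    if battery_pct < PRESET_BATTERY_PCT:     # claim 9: battery too low
        return False
    return True
```

Only when all three gates pass, in addition to the environmental condition of claim 6, would the face recognition mode be started.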
10. An unlocking control device, comprising:
a first acquisition unit for acquiring an environmental parameter;
the detection unit is used for detecting whether the environmental parameters meet preset conditions;
the execution unit is used for not starting a face recognition mode when the environmental parameters do not meet the preset conditions; when the environmental parameters meet the preset conditions, starting the face recognition mode, acquiring shooting parameters corresponding to the environmental parameters, and shooting according to the shooting parameters to obtain a face image;
the apparatus is further specifically configured to: matching the face image with a preset face template, and executing an unlocking operation when the face image is successfully matched with the preset face template;
in the aspect of matching the face image with a preset face template, the device is specifically configured to:
determining a face angle of the face image;
determining a preset face template corresponding to the face angle;
taking a region of the face image whose sharpness meets a preset requirement as the clearest region, and extracting feature points of the clearest region to obtain a first feature point set;
extracting the peripheral outline of the face image to obtain a first outline;
matching the first contour with a second contour of the preset face template, and matching the first feature point set with the preset face template;
confirming that the matching is successful when the first contour is successfully matched with the second contour of the preset face template and the first feature point set is successfully matched with the preset face template; and confirming that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
11. A mobile terminal, comprising: an application processor AP and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the one or more programs comprising instructions for performing the method according to any one of claims 6-9.
12. A computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 6-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710891744.0A CN107613550B (en) | 2017-09-27 | 2017-09-27 | Unlocking control method and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710891744.0A CN107613550B (en) | 2017-09-27 | 2017-09-27 | Unlocking control method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107613550A CN107613550A (en) | 2018-01-19 |
CN107613550B true CN107613550B (en) | 2020-12-29 |
Family
ID=61058872
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710891744.0A Active CN107613550B (en) | 2017-09-27 | 2017-09-27 | Unlocking control method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107613550B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319837A (en) * | 2018-02-13 | 2018-07-24 | 广东欧珀移动通信有限公司 | Electronic equipment, face template input method and Related product |
CN108446665B (en) * | 2018-03-30 | 2020-04-17 | 维沃移动通信有限公司 | Face recognition method and mobile terminal |
CN108810276B (en) * | 2018-06-08 | 2021-03-30 | 维沃移动通信(深圳)有限公司 | Face recognition method and mobile terminal |
CN109816628B (en) * | 2018-12-20 | 2021-09-14 | 深圳云天励飞技术有限公司 | Face evaluation method and related product |
CN110334496A (en) * | 2019-06-27 | 2019-10-15 | Oppo广东移动通信有限公司 | A kind of solution lock control method, terminal and computer readable storage medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8471889B1 (en) * | 2010-03-11 | 2013-06-25 | Sprint Communications Company L.P. | Adjusting an image for video conference display |
US9225701B2 (en) * | 2011-04-18 | 2015-12-29 | Intelmate Llc | Secure communication systems and methods |
US9659164B2 (en) * | 2011-08-02 | 2017-05-23 | Qualcomm Incorporated | Method and apparatus for using a multi-factor password or a dynamic password for enhanced security on a device |
US20130287256A1 (en) * | 2012-04-30 | 2013-10-31 | Telibrahma Convergent Communications Private Limited | Method and system for real time image recognition on a mobile device |
CN103761463B (en) * | 2014-01-13 | 2017-09-01 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN105335691A (en) * | 2014-08-14 | 2016-02-17 | 南京普爱射线影像设备有限公司 | Smiling face identification and encouragement system |
CN104616438B (en) * | 2015-03-02 | 2016-09-07 | 重庆市科学技术研究院 | A kind of motion detection method of yawning for fatigue driving detection |
CN104809375A (en) * | 2015-04-15 | 2015-07-29 | 广东欧珀移动通信有限公司 | Mobile terminal unlocking method and device |
CN104978582B (en) * | 2015-05-15 | 2018-01-30 | 苏州大学 | Shelter target recognition methods based on profile angle of chord feature |
CN105653922B (en) * | 2015-12-30 | 2021-03-19 | 联想(北京)有限公司 | Biometric authentication method and electronic device |
CN105740763A (en) * | 2016-01-21 | 2016-07-06 | 珠海格力电器股份有限公司 | Identity recognition method and device |
CN106200878A (en) * | 2016-07-19 | 2016-12-07 | 深圳市万普拉斯科技有限公司 | Fingerprint control method, device and mobile terminal |
CN106326867B (en) * | 2016-08-26 | 2019-06-07 | 维沃移动通信有限公司 | A kind of method and mobile terminal of recognition of face |
CN106991377B (en) * | 2017-03-09 | 2020-06-05 | Oppo广东移动通信有限公司 | Face recognition method, face recognition device and electronic device combined with depth information |
CN107122644B (en) * | 2017-04-12 | 2020-01-10 | Oppo广东移动通信有限公司 | Switching method of biological password identification mode and mobile terminal |
CN107465809B (en) * | 2017-07-03 | 2020-12-04 | Oppo广东移动通信有限公司 | Verification method and terminal |
2017-09-27: Application CN201710891744.0A filed in CN; patent CN107613550B (en), status Active.
Also Published As
Publication number | Publication date |
---|---|
CN107613550A (en) | 2018-01-19 |
Similar Documents
Publication | Title
---|---
CN107862265B (en) | Image processing method and related product
CN107679482B (en) | Unlocking control method and related product
CN107609514B (en) | Face recognition method and related product
CN107480496B (en) | Unlocking control method and related product
CN107292285B (en) | Iris living body detection method and related product
CN107657218B (en) | Face recognition method and related product
CN107590461B (en) | Face recognition method and related product
CN107613550B (en) | Unlocking control method and related product
CN107273510B (en) | Photo recommendation method and related product
CN107451446B (en) | Unlocking control method and related product
CN107679481B (en) | Unlocking control method and related product
CN107633499B (en) | Image processing method and related product
CN107197146B (en) | Image processing method and device, mobile terminal and computer readable storage medium
CN107403147B (en) | Iris living body detection method and related product
CN107506687B (en) | Living body detection method and related product
CN108093134B (en) | Anti-interference method of electronic equipment and related product
CN106558025B (en) | Picture processing method and device
CN107463818B (en) | Unlocking control method and related product
CN107423699B (en) | Biopsy method and Related product
CN107644219B (en) | Face registration method and related product
CN107480488B (en) | Unlocking control method and related product
CN107205116B (en) | Image selection method, mobile terminal, image selection device, and computer-readable storage medium
CN107451454B (en) | Unlocking control method and related product
CN107506708B (en) | Unlocking control method and related product
CN107633235B (en) | Unlocking control method and related product
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
CB02 | Change of applicant information | Address after: No. 18 usha Beach Road, Changan Town, Dongguan 523860, Guangdong Province; Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: No. 18 usha Beach Road, Changan Town, Dongguan 523860, Guangdong Province; Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
GR01 | Patent grant |