Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may also include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of a gesture recognition device provided in an embodiment of the present application. The gesture recognition device 100 includes a control chip 101, a depth camera 102, a distance sensor 103, and a 3D portrait recognition device 104, wherein the control chip 101 is connected with and controls the depth camera 102, the distance sensor 103, and the 3D portrait recognition device 104.
The control chip 101 is the control center of the gesture recognition device 100, and is configured to receive information and, based on that information, issue operation instructions to the depth camera 102, the distance sensor 103, and the 3D portrait recognition device 104.
The depth camera 102 is a stereoscopic vision sensor and three-dimensional depth perception module that can acquire depth and RGB video streams with high resolution, high precision, and low latency in real time, and can generate 3D images in real time for real-time target recognition, motion capture, or scene perception of three-dimensional images.
The 3D portrait recognition device 104 is a movable device that can move up and down and adjust its shooting angle, and has a built-in camera.
The mobile terminal according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and so on. For convenience of description, the above-mentioned devices are collectively referred to as a mobile terminal.
The following describes embodiments of the present application in detail.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application, which is applied to an electronic device including a camera. As shown in Fig. 2, the image processing method includes:
S201, the electronic equipment collects at least one first image through the camera.
The at least one first image may be a single first image obtained by a single press of the shutter, or multiple first images obtained by multiple presses or by the camera's continuous shooting mode, which is not limited herein.
S202, the electronic equipment acquires a portrait area in the at least one first image.
The human image region may include a human face region and a contour region of a whole body of a person in each image.
S203, the electronic equipment adjusts shooting parameters of the camera according to the portrait area.
The shooting parameters may include, but are not limited to, color temperature, color saturation, contrast, white balance, focus, and the like, which are not limited herein.
S204, the electronic equipment acquires a second image according to the shooting parameters.
The second image may be a hand image of the photographed object, or an image of a preset hand region; the latter may include other content besides the hand, which is not limited herein.
S205, the electronic equipment performs gesture recognition on the second image.
Optionally, after the target gesture in the second image is determined, a gesture image pre-stored by the user in the electronic equipment is acquired; the target gesture is compared with the gesture image to obtain a target instruction corresponding to the gesture image; and a subsequent operation is performed according to the target instruction.
Alternatively, after the target gesture in the second image is determined, the gesture images pre-stored by the user in the electronic equipment are acquired; the target gesture is compared with these gesture images, and if no corresponding gesture image exists, gesture images are acquired from the cloud, where each gesture image corresponds to one instruction; a target instruction corresponding to the matched gesture image is obtained; and a subsequent operation is performed according to the target instruction.
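The local-then-cloud matching flow just described can be sketched as follows. This is a minimal illustration only: gesture images are reduced to string identifiers, and the function names and dictionary layout are hypothetical stand-ins for the actual image comparison logic.

```python
def find_target_instruction(target_gesture, local_gestures, fetch_cloud_gestures):
    """Return the instruction mapped to the gesture image matching target_gesture.

    local_gestures: dict mapping gesture identifiers to instructions,
    e.g. {"palm": "take_photo"} (illustrative names only).
    fetch_cloud_gestures: callable returning a similar dict from the cloud,
    where each gesture image corresponds to one instruction.
    """
    # First try the gesture images pre-stored by the user on the device.
    if target_gesture in local_gestures:
        return local_gestures[target_gesture]
    # No local match: fall back to the cloud gesture library.
    cloud_gestures = fetch_cloud_gestures()
    return cloud_gestures.get(target_gesture)  # None if still unmatched
```

Once a target instruction is obtained, the subsequent operation is carried out according to it, as described above.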
As can be seen, in the embodiment of the present application, the electronic device first collects at least one first image through the camera, then acquires a portrait area in the at least one first image, adjusts the shooting parameters of the camera according to the portrait area, acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-collected image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, reduces the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space on the electronic device.
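The five-step flow S201–S205 can be summarized as a simple pipeline. Everything below is a hypothetical sketch: `SketchCamera` and all callables are placeholders for the operations the embodiment describes, not an actual device API.

```python
class SketchCamera:
    """Hypothetical stand-in for the electronic device's camera."""
    def __init__(self):
        self.shooting_parameters = None
        self.shots = 0

    def capture(self):
        self.shots += 1
        return f"image_{self.shots}"  # placeholder for real image data

    def apply(self, parameters):
        self.shooting_parameters = parameters


def image_processing_pipeline(camera, detect_portrait_area,
                              derive_parameters, recognize_gesture):
    """Sketch of steps S201-S205; all callables are hypothetical placeholders."""
    first_images = [camera.capture()]                    # S201: collect first image(s)
    portrait_area = detect_portrait_area(first_images)   # S202: acquire portrait area
    camera.apply(derive_parameters(portrait_area))       # S203: adjust parameters
    second_image = camera.capture()                      # S204: acquire second image
    return recognize_gesture(second_image)               # S205: gesture recognition
```

The point of the structure is that the second capture happens only after the parameters derived from the portrait area have been applied.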
In one possible example, the adjusting the shooting parameters of the camera according to the portrait area includes: determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object; and adjusting the shooting parameters of the camera according to the portrait distance.
For example, if the distance between user B and the camera is acquired as 5 cm, but the current focal length of the camera is 50 mm while the focal length corresponding to a 5 cm distance is 35 mm, then the focal length is adjusted from 50 mm to 35 mm, and user B is photographed a second time with the 35 mm parameter.
Therefore, in this example, the camera parameters can be adjusted based on the first image acquired by the electronic device, so that the image acquired in the second acquisition is clear. This avoids gesture recognition failures or errors caused by a blurred second image, and improves the accuracy and intelligence of camera parameter adjustment during image processing by the electronic device.
In one possible example, when the at least one first image is a single image, the determining the portrait distance according to the portrait area includes: identifying a reference center point of the portrait area in the single image; acquiring a current reference geometric center of the camera; constructing a rectangular coordinate system with the reference center point as the origin to obtain the coordinate parameters of the reference geometric center; and substituting the coordinate parameters into a preset formula to calculate the portrait distance.
Wherein the preset formula is L = √(X² + Y² + Z²), where L is the portrait distance and (X, Y, Z) are the coordinate parameters of the reference geometric center.
The reference center point of the portrait area is the intersection of the diagonals of that area.
In the camera coordinate system, the origin O is the optical center (center of projection) of the camera, the Xc and Yc axes are parallel to the x and y axes of the imaging-plane coordinate system, and the Zc axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the optical axis with the image plane is the principal point O1 of the image; the rectangular coordinate system formed by the point O and the Xc, Yc, and Zc axes is called the camera coordinate system.
Therefore, in this example, the electronic device can obtain the coordinates of the current camera from the constructed coordinate system and accurately calculate the current portrait distance through the preset formula, reducing the error in the obtained portrait distance and improving the accuracy of portrait distance calculation in image processing.
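Assuming the preset formula is the Euclidean distance between the origin (the reference center point) and the reference geometric center, i.e. L = √(X² + Y² + Z²) as defined above, the calculation can be sketched as:

```python
import math

def portrait_distance(x, y, z):
    """Portrait distance L computed from the coordinate parameters (X, Y, Z)
    of the camera's reference geometric center, expressed in a rectangular
    coordinate system whose origin is the reference center point of the
    portrait area (Euclidean distance from the origin)."""
    return math.sqrt(x * x + y * y + z * z)
```

Units follow whatever units the coordinate parameters are expressed in.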
In one possible example, when the at least one first image is a plurality of images, the determining the portrait distance according to the portrait area includes: respectively calculating at least one second distance from each photographed object to the geometric center of the camera; and when none of the at least one second distance exceeds a preset threshold, calculating the average of the at least one second distance, the average being the portrait distance.
The geometric center is defined as the center point of the current viewfinder frame of the camera, i.e., the intersection of the lines connecting the opposite corners of the current frame; this intersection is the geometric center of the camera.
The preset threshold may be set by the manufacturer at the factory, or may be a threshold formed by collecting the user's shooting habits, and is not limited herein.
Optionally, the image center point of each photographed object is obtained, and the at least one second distance from the center point of each photographed object to the geometric center of the camera is calculated respectively.
For example, the photographed object is user A, and a plurality of pictures of user A are taken: picture A, picture B, and picture C. The distance L1 from the subject in picture A to the geometric center of the camera, the distance L2 for picture B, and the distance L3 for picture C are calculated respectively; L1 is 5 cm, L2 is 5.2 cm, and L3 is 4.9 cm. Since none of these distances exceeds the preset threshold of 6 cm, the current portrait distance obtained by averaging is approximately 5 cm.
Therefore, in this example, the electronic device calculates the distance between each photographed object and the geometric center of the camera and then averages these distances to obtain the portrait distance, which can reduce calculation error, avoids miscalculation of the portrait distance, and helps improve the accuracy of obtaining the portrait distance during image processing.
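The averaging step can be sketched as follows; the 6 cm threshold is taken from the example above, and the behavior when a distance exceeds the threshold (returning no result) is an assumption, since the embodiment only specifies the non-exceeding case.

```python
def average_portrait_distance(second_distances, threshold_cm=6.0):
    """Return the portrait distance as the mean of the second distances,
    or None when any distance exceeds the preset threshold (assumed policy)."""
    if not second_distances:
        return None  # no photographed objects, no portrait distance
    if any(d > threshold_cm for d in second_distances):
        return None  # at least one second distance exceeds the preset threshold
    return sum(second_distances) / len(second_distances)
```

With the example values 5 cm, 5.2 cm, and 4.9 cm, the mean is about 5 cm, matching the text.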
In one possible example, the adjusting the shooting parameters of the camera according to the portrait distance includes: querying a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database includes the mapping relationship between portrait distances and shooting parameters.
The mapping relationship may be one-to-one, one-to-many, or many-to-many, and is not limited herein.
For example, the corresponding relationship between the portrait distance and the plurality of different shooting parameters is shown in Table 1:
| Portrait distance | Color temperature | Focal length |
| --- | --- | --- |
| 5 cm | 3500 K | 35 mm |
| 10 cm | 4000 K | 50 mm |
| 15 cm | 4500 K | 70 mm |
| ... | ... | ... |

Table 1
The preset database may be a cloud image database or a database of a specific shooting APP, and is not uniquely limited herein.
Therefore, in this example, the electronic device queries the preset database according to the portrait distance to obtain the corresponding shooting parameters, which avoids erroneous adjustment; that is, the shooting parameters are adjusted in a targeted manner, improving the accuracy of image processing by the electronic device.
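The lookup against the preset database can be sketched as follows. The data mirrors Table 1 above; the nearest-match policy for distances not listed in the database is an assumption, since the embodiment does not specify how intermediate distances are handled.

```python
# Preset database: mapping from portrait distance (cm) to shooting parameters,
# mirroring Table 1 (color temperature in K, focal length in mm).
PRESET_DB = {
    5:  {"color_temperature": 3500, "focal_length": 35},
    10: {"color_temperature": 4000, "focal_length": 50},
    15: {"color_temperature": 4500, "focal_length": 70},
}

def query_shooting_parameters(portrait_distance_cm, db=PRESET_DB):
    """Return the shooting parameters matched to the portrait distance.

    Exact distances hit the mapping directly; otherwise fall back to the
    nearest stored distance (an assumed policy, not stated in the text).
    """
    if portrait_distance_cm in db:
        return db[portrait_distance_cm]
    nearest = min(db, key=lambda d: abs(d - portrait_distance_cm))
    return db[nearest]
```

A one-to-many mapping, as mentioned above, would simply store several parameter sets per distance entry.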
In one possible example, the performing gesture recognition on the second image includes: marking a first gesture area in the second image through a preset gesture frame; receiving a determination instruction for the first gesture area, wherein the determination instruction is used to determine the area range of the first gesture; and recognizing the first gesture area according to the determination instruction.
The determination instruction may be issued through a confirmation control displayed on the display screen of the electronic equipment; the user taps the confirmation control to send the determination instruction.
Optionally, after the electronic equipment recognizes a gesture, the preset gesture frame automatically frames the gesture area. After the first gesture area is marked, the gesture frame does not disappear; at this point the user can modify the gesture frame, for example by enlarging or shrinking it, and after obtaining the target area, send a determination instruction to the electronic equipment. After receiving and recognizing the determination instruction, the electronic equipment recognizes the gesture image in the target area.
Therefore, in this example, the electronic device can perform gesture recognition in a more targeted manner through the steps of gesture area recognition, gesture area adjustment, and gesture area determination, avoiding area misrecognition, incomplete area recognition, and gesture misrecognition, and improving the accuracy of image processing by the electronic device.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of an image processing method provided in an embodiment of the present application, which is applied to an electronic device including a camera. As shown in the figure, the image processing method includes:
S301, the electronic equipment collects at least one first image through the camera.
S302, the electronic equipment acquires a portrait area in the at least one first image.
S303, the electronic equipment respectively calculates at least one second distance from each photographed object to the geometric center of the camera.
S304, when none of the at least one second distance exceeds a preset threshold, the electronic equipment calculates the average of the at least one second distance, the average being the portrait distance.
S305, the electronic equipment adjusts the shooting parameters of the camera according to the portrait distance.
S306, the electronic equipment acquires a second image according to the shooting parameters.
S307, the electronic equipment performs gesture recognition on the second image.
As can be seen, in the embodiment of the present application, the electronic device first collects at least one first image through the camera, then acquires a portrait area in the at least one first image, adjusts the shooting parameters of the camera according to the portrait area, acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-collected image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, reduces the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space on the electronic device.
In addition, the electronic equipment queries the preset database according to the portrait distance to obtain the corresponding shooting parameters, which avoids erroneous adjustment; that is, the shooting parameters are adjusted in a targeted manner, improving the accuracy of image processing by the electronic equipment.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of an image processing method provided in an embodiment of the present application, which is applied to an electronic device including a camera. As shown in the figure, the image processing method includes:
S401, the electronic equipment collects at least one first image through the camera.
S402, the electronic equipment acquires a portrait area in the at least one first image.
S403, the electronic equipment identifies the reference center point of the portrait area in the single image.
S404, the electronic equipment acquires the current reference geometric center of the camera.
S405, the electronic equipment takes the reference central point as an origin coordinate, and a rectangular coordinate system is constructed to obtain coordinate parameters of the reference geometric center.
S406, the electronic equipment substitutes the coordinate parameters into a preset formula and calculates the portrait distance.
S407, the electronic equipment queries a preset database to obtain shooting parameters matched with the portrait distance in the preset database.
S408, the electronic equipment acquires a second image according to the shooting parameters.
S409, the electronic equipment marks a first gesture area in the second image through a preset gesture frame.
S410, the electronic equipment receives a determination instruction for identifying the first gesture area.
S411, the electronic equipment identifies the first gesture area according to the determination instruction.
As can be seen, in the embodiment of the present application, the electronic device first collects at least one first image through the camera, then acquires a portrait area in the at least one first image, adjusts the shooting parameters of the camera according to the portrait area, acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-collected image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, reduces the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space on the electronic device.
In addition, the electronic equipment can obtain the coordinates of the current camera from the constructed coordinate system and accurately calculate the current portrait distance through the preset formula, reducing the error in the obtained portrait distance and improving the accuracy of portrait distance calculation in image processing.
In addition, the electronic equipment can perform gesture recognition in a more targeted manner through the steps of gesture area recognition, gesture area adjustment, and gesture area determination, avoiding area misrecognition, incomplete area recognition, and gesture misrecognition, and improving the accuracy of image processing by the electronic equipment.
Consistent with the embodiments shown in Fig. 2, Fig. 3, and Fig. 4, Fig. 5 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application. As shown in the figure, the electronic device 500 includes an application processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the application processor 510, and the one or more programs 521 include instructions for performing the following steps:
collecting at least one first image through the camera;
acquiring a portrait area in the at least one first image;
adjusting shooting parameters of the camera according to the portrait area;
acquiring a second image according to the shooting parameters;
and performing gesture recognition on the second image.
As can be seen, in the embodiment of the present application, the electronic device first collects at least one first image through the camera, then acquires a portrait area in the at least one first image, adjusts the shooting parameters of the camera according to the portrait area, acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-collected image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, reduces the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space on the electronic device.
In one possible example, in terms of the adjusting of the shooting parameters of the camera according to the portrait area, the instructions in the program are specifically configured to: determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object; and adjusting the shooting parameters of the camera according to the portrait distance.
In one possible example, when the at least one first image is a single image, in terms of the determining the portrait distance according to the portrait area, the instructions in the program are specifically configured to perform the following operations: identifying a reference center point of the portrait area in the single image;
acquiring a current reference geometric center of the camera;
constructing a rectangular coordinate system by taking the reference central point as an origin coordinate to obtain a coordinate parameter of the reference geometric center;
and substituting the coordinate parameters into a preset formula to calculate the portrait distance.
In one possible example, when the at least one first image is a plurality of images, in terms of the determining the portrait distance according to the portrait area, the instructions in the program are specifically configured to: respectively calculating at least one second distance from each photographed object to the geometric center of the camera;
when none of the at least one second distance exceeds a preset threshold, calculating the average of the at least one second distance, the average being the portrait distance.
In one possible example, in terms of the adjusting of the shooting parameters of the camera according to the portrait distance, the instructions in the program are specifically configured to: querying a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database includes the mapping relationship between portrait distances and shooting parameters.
In one possible example, in the aspect of gesture recognition on the second image, the instructions in the program are specifically configured to: marking a first gesture area in the second image through a preset gesture frame;
receiving a determination instruction for identifying the first gesture area, wherein the determination instruction is used for determining the area range of the first gesture;
identifying the first gesture area according to the determination instruction.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
The following is an apparatus embodiment of the present application, which is used to perform the method implemented by the method embodiments of the present application. The image processing apparatus 600 shown in Fig. 6 is applied to an electronic device that includes a camera. The image processing apparatus 600 includes a collection unit 601, an acquisition unit 602, an adjustment unit 603, and a recognition unit 604, wherein:
the collection unit 601 is configured to collect at least one first image through the camera;
the acquisition unit 602 is configured to acquire a portrait area in the at least one first image;
the adjustment unit 603 is configured to adjust the shooting parameters of the camera according to the portrait area;
the collection unit 601 is further configured to acquire a second image according to the shooting parameters;
the recognition unit 604 is configured to perform gesture recognition on the second image.
As can be seen, in the embodiment of the present application, the electronic device first collects at least one first image through the camera, then acquires a portrait area in the at least one first image, adjusts the shooting parameters of the camera according to the portrait area, acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-collected image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, reduces the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space on the electronic device.
In a possible example, in terms of the adjusting the shooting parameters of the camera according to the portrait area, the adjusting unit 603 is specifically configured to:
determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object;
and adjusting the shooting parameters of the camera according to the portrait distance.
In a possible example, when the at least one first image is a single image, the adjusting unit 603 is specifically configured to, in terms of determining the portrait distance according to the portrait area: identifying a reference center point of a portrait area in the single image;
acquiring a current reference geometric center of the camera;
constructing a rectangular coordinate system by taking the reference central point as an origin coordinate to obtain a coordinate parameter of the reference geometric center;
and substituting the coordinate parameters into a preset formula to calculate the portrait distance.
In a possible example, when the at least one first image is a plurality of images, in terms of determining the portrait distance according to the portrait area, the adjusting unit 603 is specifically configured to: respectively calculate at least one second distance from each photographed object to the geometric center of the camera;
and when none of the at least one second distance exceeds a preset threshold, calculate the average of the at least one second distance, the average being the portrait distance.
In a possible example, in terms of adjusting the shooting parameters of the camera according to the portrait distance, the adjusting unit 603 is specifically configured to: query a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database includes the mapping relationship between portrait distances and shooting parameters.
In one possible example, in terms of performing gesture recognition on the second image, the recognition unit 604 is specifically configured to: marking a first gesture area in the second image through a preset gesture frame;
receiving a determination instruction for identifying the first gesture area, wherein the determination instruction is used for determining the area range of the first gesture;
identifying the first gesture area according to the determination instruction.
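The recognition flow above — mark a candidate area with the preset gesture frame, wait for a determination instruction confirming the area range, then identify the area — can be sketched as below. The frame size, function names, and the placeholder classifier result are all assumptions for illustration.

```python
# Hypothetical preset gesture frame: fixed width and height, in pixels.
PRESET_FRAME = (64, 64)

def mark_gesture_area(center, frame=PRESET_FRAME):
    """Place the preset gesture frame around a detected gesture center,
    returning the marked first gesture area as (x, y, w, h)."""
    cx, cy = center
    w, h = frame
    return (cx - w // 2, cy - h // 2, w, h)

def recognize_gesture(area, determination_received):
    """Identify the first gesture area only after the determination
    instruction confirming the area range has been received."""
    if not determination_received:
        return None  # still waiting for the determination instruction
    # Placeholder for the actual gesture classifier run on the area.
    return {"region": area, "label": "gesture"}
```

Gating recognition on the determination instruction keeps the classifier from running on an unconfirmed region, which matches the ordering of steps in the text.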
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, where the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as, in accordance with the present application, some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division of the units is only a division of logical functions, and other divisions may be adopted in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, the program being stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and core concept of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.