CN112446251A - Image processing method and related device - Google Patents
Image processing method and related device
- Publication number
- CN112446251A (application number CN201910816325.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- portrait
- distance
- camera
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
Abstract
An embodiment of the present application discloses an image processing method and a related device, applied to an electronic device that includes a camera. The method includes the following steps: acquiring at least one first image through the camera; acquiring a portrait area in the at least one first image; adjusting shooting parameters of the camera according to the portrait area; acquiring a second image according to the shooting parameters; and performing gesture recognition on the second image. The method and the device help improve the quality of the images collected by the electronic device and thereby improve the accuracy of image recognition.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a related apparatus.
Background
With the development of society, recognition technologies on mobile phones are becoming increasingly rich, such as face recognition, gesture recognition, and pupil recognition. Gesture recognition may be based on the movement of various parts of the human body, but generally refers to movements of the face and hands. A user can control or interact with an electronic device using simple gestures. However, when the hand moves rapidly or the electronic device cannot focus in time, the captured image is blurred, so the accuracy of gesture recognition is low and the electronic device cannot be operated correctly in time.
Disclosure of Invention
The embodiment of the present application provides an image processing method and a related device, which help improve the quality of images acquired by an electronic device and thereby improve the accuracy of image recognition.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, where the electronic device includes a camera, and the method includes:
acquiring at least one first image through the camera;
acquiring a portrait area in the at least one first image;
adjusting shooting parameters of the camera according to the portrait area;
acquiring a second image according to the shooting parameters;
and performing gesture recognition on the second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device including a camera, and includes an acquisition unit, an adjustment unit, and an identification unit,
the acquisition unit is used for acquiring at least one first image through the camera;
the acquisition unit is used for acquiring a portrait area in the at least one first image;
the adjusting unit is used for adjusting shooting parameters of the camera according to the portrait area;
the acquisition unit is also used for acquiring a second image according to the shooting parameters;
the recognition unit is used for performing gesture recognition on the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application, and the computer includes an electronic device.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package, the computer comprising an electronic device.
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
Drawings
Reference will now be made in brief to the accompanying drawings, to which embodiments of the present application relate.
FIG. 1 is a schematic diagram of a gesture recognition device;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a flow chart of an image processing method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart illustrating an image processing method disclosed in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus disclosed in an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1, which is a schematic structural diagram of a gesture recognition device provided in an embodiment of the present application, the gesture recognition device 100 includes a control chip 101, a depth camera 102, a distance sensor 103, and a 3D portrait recognition device 104, where the control chip 101 is connected to and controls the depth camera 102, the distance sensor 103, and the 3D portrait recognition device 104.
The control chip 101 is the control center of the gesture recognition device 100 and is configured to receive information and, based on that information, issue operation instructions to the depth camera 102, the distance sensor 103, and the 3D portrait recognition device 104.
The depth camera 102 is a stereoscopic vision sensor and three-dimensional depth perception module that can acquire depth and RGB video streams with high resolution, high precision, and low latency in real time, and can generate 3D images in real time for real-time target recognition, motion capture, or scene perception of three-dimensional images.
The 3D portrait recognition device 104 is a movable device that can move up and down to adjust the shooting angle and has a camera mounted inside.
The mobile terminal in the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
The following describes embodiments of the present application in detail.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and is applied to an electronic device, where the electronic device includes a camera, and as shown in fig. 2, the image processing method includes:
S201, the electronic equipment collects at least one first image through the camera.
The first image may be acquired with a single tap to obtain a single first image, or multiple first images may be captured with multiple taps or in the camera's continuous shooting mode; the acquisition manner is not limited here.
S202, the electronic equipment acquires a portrait area in the at least one first image.
The portrait area may include the face region and the whole-body contour region of a person in each image.
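As an illustrative sketch only (not part of the claimed method), the portrait area could be located with an off-the-shelf face detector; the Haar cascade file and the detection thresholds below are assumptions for illustration, not values specified in this application:

```python
# Hypothetical sketch: locating a portrait (face) area in a first image with
# OpenCV's bundled Haar cascade. Cascade choice and detectMultiScale
# parameters are assumptions, not part of this application.
import cv2

def get_portrait_areas(first_image):
    """Return (x, y, w, h) rectangles for faces found in the image."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # default camera index, assumed
    ok, frame = cap.read()      # the "first image"
    cap.release()
    if ok:
        print(get_portrait_areas(frame))
```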
S203, the electronic equipment adjusts shooting parameters of the camera according to the portrait area.
The shooting parameters may include, but are not limited to, color temperature, color saturation, contrast, white balance, focus, and the like; they are not limited here.
S204, the electronic equipment acquires a second image according to the shooting parameters.
The second image may be a hand image of the photographed subject, or an image of a preset hand region; the image may also contain content other than the hand, which is not limited here.
S205, the electronic equipment performs gesture recognition on the second image.
Optionally, after the target gesture in the second image is determined, a gesture image pre-stored by the user in the electronic device is acquired; the target gesture is compared with the gesture image to obtain a target instruction corresponding to the gesture image; and a subsequent operation is carried out according to the target instruction.
Alternatively, after the target gesture in the second image is determined, the gesture images pre-stored by the user in the electronic device are acquired; the target gesture is compared with these gesture images, and if no corresponding gesture image exists, gesture images are acquired from the cloud, where each gesture image corresponds to one instruction; the target instruction corresponding to the matched gesture image is obtained; and a subsequent operation is carried out according to the target instruction.
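A minimal sketch of the lookup order described above, local pre-stored gesture images first and a cloud gesture library as a fallback; the matching function, the cloud query, and the similarity threshold are placeholders, not components defined in this application:

```python
# Hypothetical sketch of the described lookup order. local_gestures maps a
# pre-stored gesture template to its instruction; cloud_lookup() returns the
# same kind of mapping from the cloud; match_score() measures similarity.
# All three are placeholders supplied by the caller.
def resolve_instruction(target_gesture, local_gestures, cloud_lookup,
                        match_score, threshold=0.8):
    """Return the instruction of the best-matching gesture template, or None."""
    for template, instruction in local_gestures.items():
        if match_score(target_gesture, template) >= threshold:
            return instruction
    # No local match: fall back to the cloud library (one instruction per image).
    for template, instruction in cloud_lookup().items():
        if match_score(target_gesture, template) >= threshold:
            return instruction
    return None  # no corresponding gesture image found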
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
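A compact sketch of the overall flow of steps S201 to S205 under an OpenCV-style capture loop is given below; camera property support (focus, white-balance temperature) varies by device, and the detection, parameter-selection, and recognition callables stand in for the components discussed in the possible examples that follow:

```python
# Hypothetical end-to-end sketch of S201-S205. Whether cap.set() actually
# applies CAP_PROP_FOCUS / CAP_PROP_WB_TEMPERATURE depends on the camera
# driver; the parameter keys "focus" and "color_temperature" are assumptions.
import cv2

def process(cap, detect_portrait_area, choose_params, recognize_gesture):
    ok, first = cap.read()                                  # S201: first image
    if not ok:
        return None
    area = detect_portrait_area(first)                      # S202: portrait area
    params = choose_params(area)                            # S203: shooting parameters
    cap.set(cv2.CAP_PROP_FOCUS, params["focus"])
    cap.set(cv2.CAP_PROP_WB_TEMPERATURE, params["color_temperature"])
    ok, second = cap.read()                                 # S204: second image
    return recognize_gesture(second) if ok else None        # S205: gesture recognition
```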
In one possible example, the adjusting the shooting parameters of the camera according to the portrait area includes: determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object; and adjusting the shooting parameters of the camera according to the portrait distance.
For example, if the distance between user B and the camera is determined to be 5 cm while the current focal length of the camera is 50 mm, and the focal length corresponding to a 5 cm distance is 35 mm, then the focal length is adjusted from 50 mm to 35 mm and user B is shot a second time with the 35 mm setting.
Therefore, in this example, the shooting parameters of the camera can be adjusted based on the first image acquired by the electronic device, so that the image captured in the second acquisition is clear. This avoids gesture recognition failures or errors caused by a blurred second image and improves the accuracy and intelligence of camera parameter adjustment during image processing by the electronic device.
In one possible example, when the at least one first image is a single image, the determining the portrait distance according to the portrait area includes: identifying a reference center point of the portrait area in the single image; acquiring a current reference geometric center of the camera; constructing a rectangular coordinate system with the reference center point as the origin to obtain the coordinate parameters of the reference geometric center; and substituting the coordinate parameters into a preset formula and calculating the portrait distance.
In the preset formula, the portrait distance L is computed from the coordinate parameters (X, Y, Z) of the reference geometric center.
The reference center point of the portrait area is the intersection of the diagonals of that area.
Here, the reference center point is the optical center (projection center) of the camera; the Xc and Yc axes are parallel to the x and y axes of the imaging plane coordinate system, and the Zc axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the optical axis and the image plane is the principal point O1 of the image, and the rectangular coordinate system formed by the point O and the Xc, Yc, and Zc axes is called the camera coordinate system.
Therefore, in this example, the electronic device can obtain the coordinates of the current camera from the constructed coordinate system and accurately calculate the current portrait distance through the preset formula, which reduces the error in the obtained portrait distance and improves the accuracy of portrait distance calculation in image processing.
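The preset formula itself appears only as an image in the published text; reading the surrounding description (a rectangular coordinate system with the portrait reference center point as the origin and (X, Y, Z) as the coordinates of the camera's reference geometric center), a straightforward interpretation is a Euclidean distance. The sketch below assumes that interpretation and should not be taken as the exact published formula:

```python
# Sketch under the assumption that the preset formula is the Euclidean
# distance from the origin (the portrait reference center point) to the
# camera's reference geometric center (X, Y, Z).
import math

def portrait_distance(x, y, z):
    """L = sqrt(X^2 + Y^2 + Z^2) in the constructed rectangular coordinate system."""
    return math.sqrt(x * x + y * y + z * z)

print(portrait_distance(3.0, 0.0, 4.0))  # -> 5.0
```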
In one possible example, when the at least one first image is a plurality of images, the determining the portrait distance according to the portrait area includes: respectively calculating at least one second distance from each shot object to a geometric center in the camera; when the at least one second distance does not exceed a preset threshold value, calculating the average value of the at least one second distance, wherein the average value is the portrait distance.
The geometric center is the center point of the current camera's viewfinder frame, that is, the intersection obtained by connecting the opposite corners of the current frame; this intersection is the geometric center of the camera.
The preset threshold may be set at the factory or derived from the user's shooting habits; it is not limited here.
Optionally, an image center point of each object to be photographed is obtained, and at least one second distance from the center point of each object to be photographed to the geometric center in the camera is calculated respectively.
For example, the photographed subject is user A, and multiple pictures of user A are taken: picture A, picture B, and picture C. The distance L1 between picture A and the geometric center of the camera, the distance L2 between picture B and the geometric center, and the distance L3 between picture C and the geometric center are calculated respectively; L1 is 5 cm, L2 is 5.2 cm, and L3 is 4.9 cm. Since none of these distances exceeds the preset threshold of 6 cm, the current portrait distance is about 5 cm by averaging.
Therefore, in this example, the electronic device calculates the distance between each photographed object and the geometric center of the camera and then averages these distances to obtain the portrait distance, which reduces errors, avoids miscalculating the portrait distance, and benefits the accuracy of obtaining the portrait distance during image processing.
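The averaging step described above can be sketched as follows; the 6 cm threshold mirrors the example, and the behaviour when a distance exceeds the threshold is left open here because the text does not detail it:

```python
# Sketch of the multi-image case: if none of the second distances exceeds the
# preset threshold, the portrait distance is their average. Threshold value
# and the out-of-range behaviour are assumptions taken from the example above.
def portrait_distance_from_images(second_distances_cm, threshold_cm=6.0):
    if any(d > threshold_cm for d in second_distances_cm):
        return None  # what to do in this case is not specified here
    return sum(second_distances_cm) / len(second_distances_cm)

print(portrait_distance_from_images([5.0, 5.2, 4.9]))  # ~5.03 cm, about 5 cm as in the example
```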
In one possible example, the adjusting the shooting parameters of the camera according to the portrait distance includes: querying a preset database to obtain the shooting parameters matched with the portrait distance, where the preset database includes a mapping relationship between portrait distances and shooting parameters.
The mapping relationship may be one-to-one, one-to-many, or many-to-many, and is not limited herein.
For example, the correspondence between the portrait distance and a plurality of different shooting parameters is shown in Table 1:
| Portrait distance | Color temperature | Focal length |
|---|---|---|
| 5 cm | 3500 K | 35 mm |
| 10 cm | 4000 K | 50 mm |
| 15 cm | 4500 K | 70 mm |
| ... | ... | ... |

Table 1
The preset database may be a cloud image database or a database in a specific camera app, and is not limited here.
Therefore, in this example, the electronic device queries the preset database according to the portrait distance to obtain the corresponding shooting parameters, which avoids erroneous adjustment; that is, the shooting parameters are adjusted in a targeted manner, improving the accuracy of image processing by the electronic device.
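A small sketch of a preset database built from Table 1 is shown below; the nearest-entry matching rule is an assumption, since the text only states that a matching entry is queried:

```python
# Sketch of a preset database mapping portrait distance (cm) to shooting
# parameters (color temperature in K, focal length in mm), using the values
# of Table 1. The nearest-key lookup rule is an assumption.
PRESET_DB = {
    5:  (3500, 35),
    10: (4000, 50),
    15: (4500, 70),
}

def lookup_shooting_params(portrait_distance_cm):
    nearest = min(PRESET_DB, key=lambda d: abs(d - portrait_distance_cm))
    return PRESET_DB[nearest]

print(lookup_shooting_params(5.03))  # -> (3500, 35)
```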
In one possible example, the performing gesture recognition on the second image includes: marking a first gesture area in the second image through a preset gesture frame; receiving a determination instruction for identifying the first gesture area, where the determination instruction is used to determine the area range of the first gesture; and identifying the first gesture area according to the determination instruction.
The determination instruction may be triggered through a determination interface displayed on the display screen of the electronic device; the user clicks the determination interface to send the determination instruction.
Optionally, after the electronic device detects a gesture, the preset gesture frame automatically selects a gesture area. After the first gesture area is marked, the gesture frame does not disappear; at this point the user can modify the gesture frame, for example by enlarging or shrinking it, and after the target area is obtained, a determination instruction is sent to the electronic device. After the electronic device receives and identifies the determination instruction, it recognizes the gesture image within the target area.
Therefore, in this example, the electronic device can perform gesture recognition in a more targeted way through the steps of recognizing the gesture area, adjusting the gesture area, and confirming the gesture area, which avoids misrecognizing the area, recognizing an incomplete area, or misrecognizing the gesture, and improves the accuracy of image processing by the electronic device.
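The region-confirmation flow above can be sketched as follows; the confirmation callback and the recognizer are placeholders, and the image is assumed to be a NumPy-style array so the confirmed box can be cropped by slicing:

```python
# Hypothetical sketch: a preset gesture frame proposes a first gesture area,
# the user may resize it and then taps the determination control, and gesture
# recognition runs only on the confirmed region. wait_for_confirmation() and
# recognize() are placeholders for components not specified here.
def recognize_confirmed_gesture(second_image, proposed_box,
                                wait_for_confirmation, recognize):
    """proposed_box: (x, y, w, h) marked by the preset gesture frame."""
    x, y, w, h = wait_for_confirmation(proposed_box)   # user-adjusted target area
    region = second_image[y:y + h, x:x + w]            # crop to the confirmed area
    return recognize(region)
```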
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and the image processing method is applied to an electronic device, where the electronic device includes a camera, and as shown in the diagram, the image processing method includes:
S301, the electronic equipment collects at least one first image through the camera.
S302, the electronic equipment acquires a portrait area in the at least one first image.
S303, the electronic equipment respectively calculates at least one second distance from each shot object to the geometric center in the camera.
S304, when the at least one second distance does not exceed a preset threshold value, the electronic equipment calculates an average value of the at least one second distance, wherein the average value is a portrait distance.
S305, the electronic equipment adjusts the shooting parameters of the camera according to the portrait distance.
S306, the electronic equipment acquires a second image according to the shooting parameters.
S307, the electronic equipment performs gesture recognition on the second image.
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
In addition, the electronic device queries the preset database according to the portrait distance to obtain the corresponding shooting parameters, which avoids erroneous adjustment; that is, the shooting parameters are adjusted in a targeted manner, improving the accuracy of image processing by the electronic device.
Referring to fig. 4, fig. 4 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and the image processing method is applied to an electronic device, where the electronic device includes a camera. As shown in the figure, the image processing method includes:
S401, the electronic equipment collects at least one first image through the camera.
S402, the electronic equipment acquires a portrait area in the at least one first image.
S403, the electronic equipment identifies the reference center point of the portrait area in the single image.
S404, the electronic equipment acquires the current reference geometric center of the camera.
S405, the electronic equipment constructs a rectangular coordinate system with the reference center point as the origin to obtain the coordinate parameters of the reference geometric center.
S406, the electronic equipment substitutes the coordinate parameters into a preset formula and calculates the portrait distance.
S407, the electronic equipment queries a preset database to obtain shooting parameters matched with the portrait distance in the preset database.
S408, the electronic equipment acquires a second image according to the shooting parameters.
S409, the electronic equipment marks a first gesture area in the second image through a preset gesture frame.
S410, the electronic equipment receives a determination instruction for identifying the first gesture area.
S411, the electronic equipment identifies the first gesture area according to the determination instruction.
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
In addition, the electronic device can obtain the coordinates of the current camera from the constructed coordinate system and accurately calculate the current portrait distance through the preset formula, which reduces the error in the obtained portrait distance and improves the accuracy of portrait distance calculation in image processing.
In addition, the electronic device can perform gesture recognition in a more targeted way through the steps of recognizing the gesture area, adjusting the gesture area, and confirming the gesture area, which avoids misrecognizing the area, recognizing an incomplete area, or misrecognizing the gesture, and improves the accuracy of image processing by the electronic device.
Consistent with the embodiments shown in fig. 2, fig. 3, and fig. 4, fig. 5 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application, and as shown in the figure, the electronic device 500 includes an application processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the application processor 510, and the one or more programs 521 include instructions for performing the following steps;
collecting at least one first image through the camera;
acquiring a portrait area in the at least one first image;
adjusting shooting parameters of the camera according to the portrait area;
acquiring a second image according to the shooting parameters;
and performing gesture recognition on the second image.
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
In one possible example, in terms of the adjusting of the shooting parameters of the camera according to the portrait area, the instructions in the program are specifically configured to: determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object; and adjusting the shooting parameters of the camera according to the portrait distance.
In one possible example, when the at least one first image is a single image, in terms of the determining the portrait distance according to the portrait area, the instructions in the program are specifically configured to perform the following operations: identifying a reference center point of a portrait area in the single image;
acquiring a current reference geometric center of the camera;
constructing a rectangular coordinate system by taking the reference central point as an origin coordinate to obtain a coordinate parameter of the reference geometric center;
and substituting the coordinate parameters into a preset formula to calculate the portrait distance.
In one possible example, when the at least one first image is a plurality of images, in terms of the determining the portrait distance according to the portrait area, the instructions in the program are specifically configured to: respectively calculating at least one second distance from each shot object to a geometric center in the camera;
when the at least one second distance does not exceed a preset threshold value, calculating an average value of the at least one second distance, wherein the average value is a portrait distance.
In one possible example, in terms of the adjusting of the shooting parameters of the camera according to the portrait distance, the instructions in the program are specifically configured to: and inquiring a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database comprises the mapping relation between the portrait distance and the shooting parameters.
In one possible example, in the aspect of gesture recognition on the second image, the instructions in the program are specifically configured to: marking a first gesture area in the second image through a preset gesture frame;
receiving a determination instruction for identifying the first gesture area, wherein the determination instruction is used for determining the area range of the first gesture;
identifying the first gesture area according to the determination instruction.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
The following is an apparatus embodiment of the present application, which is used to perform the method implemented in the above method embodiments. The image processing apparatus 600 shown in fig. 6 is applied to an electronic device that includes a camera. The image processing apparatus 600 includes a collection unit 601, an acquiring unit 602, an adjusting unit 603, and a recognition unit 604, where,
the collection unit 601 is configured to collect at least one first image through the camera;
the acquiring unit 602 is configured to acquire a portrait area in the at least one first image;
the adjusting unit 603 is configured to adjust the shooting parameters of the camera according to the portrait area;
the collection unit 601 is further configured to collect a second image according to the shooting parameters;
the recognition unit 604 is configured to perform gesture recognition on the second image.
It can be seen that, in the embodiment of the present application, the electronic device first acquires at least one first image through the camera, then acquires a portrait area in the at least one first image, then adjusts the shooting parameters of the camera according to the portrait area, then acquires a second image according to the adjusted shooting parameters, and finally performs gesture recognition on the second image. In this way, the electronic device can recognize the first-captured image to obtain the current portrait area, detect that area, and adjust the shooting parameters in time accordingly, ensuring that the image obtained after the adjustment is clear. This avoids capturing unclear or blurred images in subsequent shooting, prevents the generation of invalid or blurred pictures, helps improve the image processing efficiency of the electronic device, and saves storage space in the electronic device.
In a possible example, in terms of the adjusting the shooting parameters of the camera according to the portrait area, the adjusting unit 603 is specifically configured to:
determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object;
and adjusting the shooting parameters of the camera according to the portrait distance.
In a possible example, when the at least one first image is a single image, the adjusting unit 603 is specifically configured to, in terms of determining the portrait distance according to the portrait area: identifying a reference center point of a portrait area in the single image;
acquiring a current reference geometric center of the camera;
constructing a rectangular coordinate system by taking the reference central point as an origin coordinate to obtain a coordinate parameter of the reference geometric center;
and substituting the coordinate parameters into a preset formula and calculating the portrait distance.
In a possible example, when the at least one first image is a plurality of images, the adjusting unit 603 is specifically configured to, in terms of determining the portrait distance according to the portrait area: respectively calculating at least one second distance from each shot object to a geometric center in the camera;
when the at least one second distance does not exceed a preset threshold value, calculating an average value of the at least one second distance, wherein the average value is a portrait distance.
In a possible example, in terms of adjusting the shooting parameters of the camera according to the portrait distance, the adjusting unit 603 is specifically configured to: and inquiring a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database comprises the mapping relation between the portrait distance and the shooting parameters.
In one possible example, in terms of performing gesture recognition on the second image, the recognition unit 604 is specifically configured to: marking a first gesture area in the second image through a preset gesture frame;
receiving a determination instruction for identifying the first gesture area, wherein the determination instruction is used for determining the area range of the first gesture;
identifying the first gesture area according to the determination instruction.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An image processing method is applied to an electronic device, the electronic device comprises a camera, and the method comprises the following steps:
acquiring at least one first image through the camera;
acquiring a portrait area in the at least one first image;
adjusting shooting parameters of the camera according to the portrait area;
acquiring a second image according to the shooting parameters;
and performing gesture recognition on the second image.
2. The method according to claim 1, wherein the adjusting the shooting parameters of the camera according to the portrait area comprises:
determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object;
and adjusting the shooting parameters of the camera according to the portrait distance.
3. The method according to claim 1 or 2, wherein when the first image is a single image, the determining the portrait distance according to the portrait area comprises:
identifying a reference center point of a portrait area in the single image;
acquiring a current reference geometric center of the camera;
constructing a rectangular coordinate system by taking the reference central point as an origin coordinate to obtain a coordinate parameter of the reference geometric center;
and substituting the coordinate parameters into a preset formula and calculating the portrait distance.
4. The method according to claim 1 or 2, wherein when the first image is a plurality of images, the determining the portrait distance according to the portrait area comprises:
respectively calculating at least one second distance from each shot object to a geometric center in the camera;
when the at least one second distance does not exceed a preset threshold value, calculating an average value of the at least one second distance, wherein the average value is a portrait distance.
5. The method of claim 2, wherein the adjusting the shooting parameters of the camera according to the portrait distance comprises:
and inquiring a preset database to obtain the shooting parameters matched with the portrait distance in the preset database, wherein the preset database comprises the mapping relation between the portrait distance and the shooting parameters.
6. The method of claim 1, wherein the gesture recognizing the second image comprises:
marking a first gesture area in the second image through a preset gesture frame;
receiving a determination instruction for identifying the first gesture area, wherein the determination instruction is used for determining the area range of the first gesture;
identifying the first gesture area according to the determination instruction.
7. An image processing device is applied to electronic equipment, the electronic equipment comprises a camera, the image processing device comprises an acquisition unit, an adjustment unit and an identification unit, wherein,
the acquisition unit is used for acquiring at least one first image through the camera;
the acquisition unit is used for acquiring a portrait area in the at least one first image;
the adjusting unit is used for adjusting shooting parameters of the camera according to the portrait area;
the acquisition unit is also used for acquiring a second image according to the shooting parameters;
the recognition unit is used for performing gesture recognition on the second image.
8. The image processing apparatus according to claim 7, wherein, in said adjusting the shooting parameters of the camera according to the portrait area, the adjusting unit is specifically configured to:
determining a portrait distance according to the portrait area, wherein the portrait distance is the distance between the camera and a shot object;
and adjusting the shooting parameters of the camera according to the portrait distance.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910816325.XA CN112446251A (en) | 2019-08-30 | 2019-08-30 | Image processing method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910816325.XA CN112446251A (en) | 2019-08-30 | 2019-08-30 | Image processing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112446251A true CN112446251A (en) | 2021-03-05 |
Family
ID=74734099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910816325.XA Pending CN112446251A (en) | 2019-08-30 | 2019-08-30 | Image processing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446251A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113992904A (en) * | 2021-09-22 | 2022-01-28 | 联想(北京)有限公司 | Information processing method and device, electronic equipment and readable storage medium |
CN115273155A (en) * | 2022-09-28 | 2022-11-01 | 成都大熊猫繁育研究基地 | Method and system for identifying pandas through portable equipment |
CN116301362A (en) * | 2023-02-27 | 2023-06-23 | 荣耀终端有限公司 | Image processing method, electronic device and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853071A (en) * | 2010-05-13 | 2010-10-06 | 重庆大学 | Gesture identification method and system based on visual sense |
CN103248824A (en) * | 2013-04-27 | 2013-08-14 | 天脉聚源(北京)传媒科技有限公司 | Method and device for determining shooting angle of camera and picture pick-up system |
CN103324285A (en) * | 2013-05-24 | 2013-09-25 | 深圳Tcl新技术有限公司 | Camera adjusting method and terminal based on gesture system |
EP2816404A1 (en) * | 2013-03-28 | 2014-12-24 | Huawei Technologies Co., Ltd | Quick automatic focusing method and image acquisition device |
US20150116353A1 (en) * | 2013-10-30 | 2015-04-30 | Morpho, Inc. | Image processing device, image processing method and recording medium |
CN106648063A (en) * | 2016-10-19 | 2017-05-10 | 北京小米移动软件有限公司 | Gesture recognition method and device |
US20170169570A1 (en) * | 2015-12-09 | 2017-06-15 | Adobe Systems Incorporated | Image Classification Based On Camera-to-Object Distance |
CN107680128A (en) * | 2017-10-31 | 2018-02-09 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN107835359A (en) * | 2017-10-25 | 2018-03-23 | 捷开通讯(深圳)有限公司 | Triggering method of taking pictures, mobile terminal and the storage device of a kind of mobile terminal |
CN108446025A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Filming control method and Related product |
CN109413326A (en) * | 2018-09-18 | 2019-03-01 | Oppo(重庆)智能科技有限公司 | Camera control method and Related product |
CN110008818A (en) * | 2019-01-29 | 2019-07-12 | 北京奇艺世纪科技有限公司 | A kind of gesture identification method, device and computer readable storage medium |
- 2019-08-30 CN CN201910816325.XA patent/CN112446251A/en active Pending
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853071A (en) * | 2010-05-13 | 2010-10-06 | 重庆大学 | Gesture identification method and system based on visual sense |
EP2816404A1 (en) * | 2013-03-28 | 2014-12-24 | Huawei Technologies Co., Ltd | Quick automatic focusing method and image acquisition device |
CN103248824A (en) * | 2013-04-27 | 2013-08-14 | 天脉聚源(北京)传媒科技有限公司 | Method and device for determining shooting angle of camera and picture pick-up system |
CN103324285A (en) * | 2013-05-24 | 2013-09-25 | 深圳Tcl新技术有限公司 | Camera adjusting method and terminal based on gesture system |
US20150116353A1 (en) * | 2013-10-30 | 2015-04-30 | Morpho, Inc. | Image processing device, image processing method and recording medium |
US20170169570A1 (en) * | 2015-12-09 | 2017-06-15 | Adobe Systems Incorporated | Image Classification Based On Camera-to-Object Distance |
CN106648063A (en) * | 2016-10-19 | 2017-05-10 | 北京小米移动软件有限公司 | Gesture recognition method and device |
CN107835359A (en) * | 2017-10-25 | 2018-03-23 | 捷开通讯(深圳)有限公司 | Triggering method of taking pictures, mobile terminal and the storage device of a kind of mobile terminal |
CN107680128A (en) * | 2017-10-31 | 2018-02-09 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN108446025A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Filming control method and Related product |
CN109413326A (en) * | 2018-09-18 | 2019-03-01 | Oppo(重庆)智能科技有限公司 | Camera control method and Related product |
CN110008818A (en) * | 2019-01-29 | 2019-07-12 | 北京奇艺世纪科技有限公司 | A kind of gesture identification method, device and computer readable storage medium |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113992904A (en) * | 2021-09-22 | 2022-01-28 | 联想(北京)有限公司 | Information processing method and device, electronic equipment and readable storage medium |
CN115273155A (en) * | 2022-09-28 | 2022-11-01 | 成都大熊猫繁育研究基地 | Method and system for identifying pandas through portable equipment |
CN115273155B (en) * | 2022-09-28 | 2022-12-09 | 成都大熊猫繁育研究基地 | Method and system for identifying pandas through portable equipment |
CN116301362A (en) * | 2023-02-27 | 2023-06-23 | 荣耀终端有限公司 | Image processing method, electronic device and storage medium |
CN116301362B (en) * | 2023-02-27 | 2024-04-05 | 荣耀终端有限公司 | Image processing method, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106251334B (en) | A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system | |
US10306165B2 (en) | Image generating method and dual-lens device | |
US9235897B2 (en) | Stereoscopic image generating device and stereoscopic image generating method | |
CN111345029B (en) | Target tracking method and device, movable platform and storage medium | |
CN112446251A (en) | Image processing method and related device | |
US11523056B2 (en) | Panoramic photographing method and device, camera and mobile terminal | |
CN113301320B (en) | Image information processing method and device and electronic equipment | |
US9613404B2 (en) | Image processing method, image processing apparatus and electronic device | |
CN112207821B (en) | Target searching method of visual robot and robot | |
CN108510540A (en) | Stereoscopic vision video camera and its height acquisition methods | |
CN112200771A (en) | Height measuring method, device, equipment and medium | |
CN110136207B (en) | Fisheye camera calibration system, fisheye camera calibration method, fisheye camera calibration device, electronic equipment and storage medium | |
US20190156511A1 (en) | Region of interest image generating device | |
WO2021022989A1 (en) | Calibration parameter obtaining method and apparatus, processor, and electronic device | |
US11514608B2 (en) | Fisheye camera calibration system, method and electronic device | |
CN109257540B (en) | Photographing correction method of multi-photographing lens group and photographing device | |
CN106919246A (en) | The display methods and device of a kind of application interface | |
CN107436681A (en) | Automatically adjust the mobile terminal and its method of the display size of word | |
CN106934828A (en) | Depth image processing method and depth image processing system | |
CN114608521B (en) | Monocular ranging method and device, electronic equipment and storage medium | |
CN106323190B (en) | The depth measurement method of customizable depth measurement range and the system of depth image | |
CN115588052A (en) | Sight direction data acquisition method, device, equipment and storage medium | |
CN112446254A (en) | Face tracking method and related device | |
CN112686937B (en) | Depth image generation method, device and equipment | |
CN115834860A (en) | Background blurring method, apparatus, device, storage medium, and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |