CN113724140B - Image processing method, electronic device, medium and system - Google Patents
- Publication number: CN113724140B
- Application number: CN202010450719.0A
- Authority: CN (China)
- Prior art keywords: image, processed, deformation, portrait, electronic equipment
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/80: Image enhancement or restoration; geometric correction
- G06T7/194: Image analysis; segmentation or edge detection involving foreground-background segmentation
- G06T7/73: Image analysis; determining position or orientation of objects or cameras using feature-based methods
- G06T7/90: Image analysis; determination of colour characteristics
- H04N23/80: Cameras or camera modules comprising electronic image sensors; camera processing pipelines and components thereof
- G06T2207/20221: Indexing scheme for image analysis or enhancement; image fusion, image merging
- G06T2207/30201: Indexing scheme, subject of image; human being, person; face
- H04N23/61: Control of cameras or camera modules based on recognised objects
Abstract
The application relates to an image processing method, an electronic device, a medium, and a system. The method comprises the following steps: the electronic device acquires an image to be processed; the electronic device determines that the image to be processed contains a target to be corrected that requires deformation correction; the electronic device determines the deformation type of the target to be corrected; and the electronic device corrects the deformation in the image to be processed according to the determined deformation type. The method can be applied in the technical field of image processing: by judging the deformation type of the target to be corrected in the image to be processed, a corresponding correction strategy is adaptively matched, meeting the user's need to correct image deformation across different shooting scenes.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image processing method, an electronic device, a medium, and a system.
Background
In recent years, with the development of terminal technology, the functions of electronic devices have steadily improved, significantly changing how people consume, entertain, communicate, and work. Taking a mobile phone as an example, its photographing function is used frequently in daily life. For example, a user may want to capture beautiful scenery and buildings encountered while travelling; for another example, a user may want to record happy moments at a friend or family gathering by taking pictures with a mobile phone. However, when taking pictures with a mobile phone, users often cannot control the shooting angle, distance, and similar factors well, so the resulting photos are frequently of poor quality. Meanwhile, the demand for self-portraits and similar photography keeps growing, and users expect images with good imaging quality.
Currently, in mobile phone photography, a person is deformed during projection through the lens onto the imaging plane, owing to lens-module distortion, perspective projection, and similar causes. For example, as shown in fig. 1 (a), when many people are photographed in a self-portrait, the subjects near the edge of the large field of view appear stretched toward the four corners of the image (hereinafter referred to as wide-angle deformation). In addition, when a photographed person is very close to or far from the lens, perspective deformation of the portrait occurs, as shown in fig. 1 (c) and fig. 1 (e). In fig. 1 (c), when the photographed person is close to the lens, the portrait shows an enlarged nose, an elongated face, and shrunken (or even vanishing) ears; in fig. 1 (e), when the photographed person is far from the lens, the facial features of the portrait appear flattened.
In the prior art, only one of these deformation types can be corrected at a time, which cannot satisfy users who want images with good imaging quality across different shooting scenes.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an electronic device, a medium, and a system. In this technical scheme, the acquired image is analyzed to judge whether a target object (such as a person) exists in it; if so, the deformation type of the target object is determined from the object distance of the image (i.e., the distance from the photographed subject to the camera) and the position of the target object within the image, and a corresponding correction method is adaptively matched to the determined type. Specifically, when the object distance of the acquired image exceeds an object distance threshold, a target object at the image boundary is determined to have wide-angle deformation and a target object at the image center to have perspective deformation; when the object distance is within the threshold, whether the image contains one target object or several, a target object at the image center is determined to have perspective deformation and a target object at the image boundary to have wide-angle deformation. A correction strategy matched to the deformation type of the image can thus be applied adaptively, so that the corrected image reflects the real state of the photographed subject more truthfully and attractively, improving the user experience.
In a first aspect, an embodiment of the present application provides an image processing method, including:
The electronic device acquires an image to be processed; the electronic device determines that the image to be processed contains a target to be corrected that requires deformation correction; the electronic device determines the deformation type of the target to be corrected; and the electronic device performs deformation correction on the target to be corrected in the image to be processed according to the determined deformation type. In this way, a correction strategy matched to the deformation type of the target can be applied adaptively, so that the corrected image reflects the real state of the photographed subject more truthfully and attractively, improving the user experience.
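Read as control flow, the four claimed steps form a small dispatch pipeline. The sketch below is a hypothetical rendering of that flow, not code from the patent; the detection, classification, and correction stages are passed in as callables because the claims leave their implementations open.

```python
from typing import Callable, Iterable
import numpy as np

Image = np.ndarray

def process_image(image: Image,
                  detect_targets: Callable[[Image], Iterable[dict]],
                  classify_deformation: Callable[[dict], str],
                  correct_perspective: Callable[[Image, dict], Image],
                  correct_wide_angle: Callable[[Image, dict], Image]) -> Image:
    """Claimed four-step method: acquire, detect targets, classify, correct."""
    for target in detect_targets(image):              # step 2: find targets to correct
        kind = classify_deformation(target)           # step 3: determine deformation type
        if kind == "perspective":                     # "first deformation"
            image = correct_perspective(image, target)
        else:                                         # "second deformation" (wide-angle)
            image = correct_wide_angle(image, target)
    return image                                      # step 4: corrected image
```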
In a possible implementation of the first aspect, the electronic device determining that the image to be processed contains a target to be corrected that requires deformation correction includes:
when the electronic device determines that a portrait exists in the image to be processed and the object distance of the image to be processed is less than or equal to the object distance threshold, the electronic device determines that the portrait is a target to be corrected that requires deformation correction; or, when the electronic device determines that a portrait exists in the image to be processed, the object distance of the image to be processed is greater than the object distance threshold, and the portrait is at a first position of the image to be processed, the electronic device determines that the portrait is a target to be corrected that requires deformation correction. The first position lies in a boundary region of the image to be processed. It is understood that the target to be corrected may be a person, an animal, or another subject with a strong stereoscopic appearance.
In a possible implementation of the first aspect, the electronic device determining the deformation type of the target to be corrected includes:
when the object distance of the image to be processed is less than or equal to the object distance threshold and the portrait is at a second position of the image to be processed, the electronic device determines that the deformation type of the portrait is a first deformation;
when the object distance of the image to be processed is less than or equal to the object distance threshold and the portrait is at the first position of the image to be processed, the electronic device determines that the deformation type of the portrait is a second deformation;
when the object distance of the image to be processed is greater than the object distance threshold and the portrait is at the first position of the image to be processed, the electronic device determines that the deformation type of the portrait is the second deformation;
and when the object distance of the image to be processed is greater than the object distance threshold and the portrait is at the second position of the image to be processed, the electronic device determines that the deformation type of the portrait is the first deformation.
The second position lies in a central region of the image to be processed; the central region may be understood as the area of the image other than the boundary region.
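Taken together, the four cases reduce to a simple rule: the portrait's position decides the deformation type on either side of the threshold, while the object distance decides whether only boundary portraits qualify as targets. A minimal sketch under that reading follows; the function names are hypothetical.

```python
def needs_correction(has_portrait: bool, distance_m: float,
                     threshold_m: float, in_boundary: bool) -> bool:
    """Claimed target selection: a close-range portrait always qualifies;
    beyond the object-distance threshold only a boundary portrait does."""
    return has_portrait and (distance_m <= threshold_m or in_boundary)

def classify_deformation(in_boundary: bool) -> str:
    """In all four claimed cases the position alone fixes the type:
    boundary region -> wide-angle ("second deformation"),
    central region  -> perspective ("first deformation")."""
    return "wide_angle" if in_boundary else "perspective"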
In a possible implementation of the first aspect, the second position of the image to be processed is the area of the image other than the first position, and the electronic device determines that the portrait is at the first position of the image to be processed as follows:
when, within the pixel region corresponding to the face of the portrait, the proportion of pixels whose field angle exceeds a set value to the total number of face pixels of the portrait is greater than a ratio threshold, the electronic device determines that the portrait is at the first position of the image to be processed; or
when, within the pixel region corresponding to the face of the portrait, the proportion of pixels whose coordinates fall into the edge region of the image to the total number of face pixels of the portrait is greater than a ratio threshold, the electronic device determines that the portrait is at the first position of the image to be processed.
For example, if the maximum field angle of the image to be processed is 70 degrees, the portrait can be considered to be at the image boundary when the proportion of its face pixels lying beyond a field angle of 60 degrees exceeds 10%. In other embodiments of the application, the position of the portrait in the image may also be determined in other ways.
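A minimal sketch of that boundary test follows. The function name is hypothetical; the 60-degree cutoff and 10% ratio reproduce the example just given (70-degree maximum field of view).

```python
import numpy as np

def portrait_in_boundary(face_angles_deg: np.ndarray,
                         angle_cutoff_deg: float = 60.0,
                         ratio_threshold: float = 0.10) -> bool:
    """Claimed test: the portrait is at the image boundary when the share of
    its face pixels whose field angle exceeds the cutoff passes the threshold.

    face_angles_deg: per-pixel field angle for the pixels of one face region.
    """
    ratio = float(np.mean(face_angles_deg > angle_cutoff_deg))
    return ratio > ratio_threshold
```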
In a possible implementation of the first aspect, the electronic device determines that the deformation type of the portrait is the first deformation; and
the electronic device performing deformation correction on the portrait in the image to be processed according to the determined deformation type includes:
the electronic device segments the portrait and the background of the image to be processed to obtain a portrait region and a background region;
the electronic device acquires the facial feature point positions of the portrait in the portrait region;
based on the acquired facial feature point positions, the electronic device computes, via an optical flow estimation algorithm, the offsets between the acquired positions and the estimated facial feature point positions of the undeformed portrait;
based on the acquired facial feature point positions and the computed offsets, the electronic device corrects the portrait in the portrait region to obtain a corrected portrait region;
and the electronic device fuses the corrected portrait region with the background region to obtain the corrected image corresponding to the image to be processed.
In some embodiments, to improve the accuracy of image correction, the object distance of the image to be processed may also be acquired, and the electronic device estimates the offsets via the optical flow estimation algorithm based on both the acquired facial feature point positions and the object distance, as illustrated in the sketch below.
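The claimed sub-steps translate into a short pipeline. The following is a hypothetical sketch; the segmentation, landmark, flow-estimation, warping, and fusion stages are injected as callables, since the patent does not fix particular models for them.

```python
from typing import Callable, Optional, Tuple
import numpy as np

Image = np.ndarray

def correct_first_deformation_by_flow(
        image: Image,
        segment: Callable[[Image], Tuple[Image, Image, Image]],
        landmarks_of: Callable[[Image], np.ndarray],
        flow_offsets: Callable[[np.ndarray, Optional[float]], np.ndarray],
        warp: Callable[[Image, np.ndarray, np.ndarray], Image],
        fuse: Callable[[Image, Image, Image], Image],
        object_distance_m: Optional[float] = None) -> Image:
    portrait, background, mask = segment(image)      # portrait/background split
    pts = landmarks_of(portrait)                     # facial feature point positions
    # Offsets between observed landmarks and the estimated undeformed ones;
    # the object distance, when supplied, conditions the estimate (see text).
    offs = flow_offsets(pts, object_distance_m)
    corrected = warp(portrait, pts, offs)            # corrected portrait region
    return fuse(corrected, background, mask)         # fused corrected image
```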
In a possible implementation of the first aspect, the electronic device determines that the deformation type of the portrait is the first deformation; and
the electronic device performing deformation correction on the image to be processed according to the determined deformation type includes:
the electronic device segments the portrait and the background of the image to be processed to obtain a portrait region and a background region;
the electronic device acquires the facial feature point positions of the portrait in the portrait region;
based on the acquired facial feature point positions and a reference face, the electronic device builds an undeformed 3-dimensional head model corresponding to the portrait in the portrait region;
the electronic device creates a virtual camera and uses it, under preset shooting parameters, to acquire a 2-dimensional image of the 3-dimensional head model, taking that 2-dimensional image as the corrected portrait region, where the preset shooting parameters include at least one of a shooting distance and a shooting angle;
and the electronic device fuses the corrected portrait region with the background region to obtain the corrected image corresponding to the image to be processed.
In some embodiments, to improve the accuracy of image correction, the object distance of the image to be processed may also be acquired, and the electronic device builds the undeformed 3-dimensional head model corresponding to the portrait based on the acquired facial feature point positions, the object distance, and the reference face, as in the sketch below.
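The 3-D route differs from the optical-flow route only in its middle stages: fit an undeformed head model and re-render it with a virtual camera at preset shooting parameters. A hypothetical sketch, again with the open stages injected; the default distance and angle values here are assumptions for illustration only.

```python
from typing import Callable, Optional, Tuple
import numpy as np

Image = np.ndarray

def correct_first_deformation_by_3d(
        image: Image,
        segment: Callable[[Image], Tuple[Image, Image, Image]],
        landmarks_of: Callable[[Image], np.ndarray],
        fit_head_model: Callable[..., object],
        render: Callable[[object, float, float], Image],
        fuse: Callable[[Image, Image, Image], Image],
        reference_face: object,
        object_distance_m: Optional[float] = None,
        shoot_distance_m: float = 1.5,           # assumed preset shooting distance
        shoot_angle_deg: float = 0.0) -> Image:  # assumed preset shooting angle
    portrait, background, mask = segment(image)
    pts = landmarks_of(portrait)
    # Undeformed 3-D head model fitted to the landmarks, the reference face
    # and, when available, the object distance (see text).
    model = fit_head_model(pts, reference_face, object_distance_m)
    corrected = render(model, shoot_distance_m, shoot_angle_deg)  # virtual camera
    return fuse(corrected, background, mask)
```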
In a possible implementation of the first aspect, the electronic device determines that the deformation type of the portrait is the second deformation; and
the electronic device performing deformation correction on the portrait in the image to be processed according to the determined deformation type includes:
the electronic device extracts the position parameter and the color parameter of each pixel of the image to be processed, to obtain a first coordinate matrix and a color matrix;
the electronic device constructs a constraint equation over the first coordinate matrix and solves it for a displacement matrix, where the displacement matrix is the matrix difference between a second coordinate matrix and the first coordinate matrix, and the second coordinate matrix is formed by the position parameters of the pixels of the corrected image;
the electronic device obtains the second coordinate matrix from the first coordinate matrix and the displacement matrix;
and the electronic device performs color filling over the second coordinate matrix based on the first coordinate matrix and the color matrix to obtain the corrected image.
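In symbols, with $M_0$ the first coordinate matrix, $M_t$ the second, and $D_t$ the displacement matrix, the relationship claimed above can be written as follows; the concrete constraint terms $E_k$ and weights $w_k$ are left abstract here, since the claims do not fix them:

$$M_t = M_0 + D_t, \qquad D_t = \arg\min_{D} \sum_{k} w_k \, E_k\left(M_0,\; M_0 + D\right).$$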
In a possible implementation of the first aspect, the electronic device performing deformation correction on the target to be corrected in the image to be processed according to the determined deformation type includes:
the electronic device preprocesses the image to be processed to obtain a preprocessed image corresponding to it, where the preprocessing comprises noise reduction and/or optical distortion correction;
and the electronic device performs deformation correction on the target to be corrected in the preprocessed image according to the determined deformation type.
In a possible implementation of the first aspect, the electronic device acquiring the image to be processed includes:
the electronic device, in response to a shooting instruction from its user, takes an image captured by its camera as the image to be processed; or the electronic device acquires the image to be processed from its gallery.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a first determining module, configured to acquire an image to be processed;
a second determining module, configured to determine that the image to be processed contains a target to be corrected that requires deformation correction;
a third determining module, configured to determine the deformation type of the target to be corrected;
and a correction module, configured to perform deformation correction on the target to be corrected in the image to be processed according to the determined deformation type.
In a third aspect, embodiments of the present application provide a computer-readable medium having stored thereon instructions that, when executed on a computer, cause the computer to perform the image processing method of the first aspect or any of its possible implementations.
In a fourth aspect, embodiments of the present application provide a system, comprising:
a memory for storing instructions to be executed by one or more processors of the system; and
a processor, being one of the processors of the system, for performing the image processing method of the first aspect or any of its possible implementations.
Drawings
FIG. 1 (a) illustrates an image that has been subjected to wide-angle distortion, according to some embodiments of the application;
FIG. 1 (b) shows an image after correction of the image shown in FIG. 1 (a), according to some embodiments of the application;
FIG. 1 (c) shows an image that has been subjected to perspective distortion, according to some embodiments of the application;
FIG. 1 (d) shows a corrected image of the image shown in FIG. 1 (c), according to some embodiments of the application;
FIG. 1 (e) shows another image that has been subjected to perspective deformation, according to some embodiments of the application;
FIG. 2 illustrates a scene diagram of a mobile phone correcting a user's self-captured image, according to some embodiments of the application;
FIG. 3 (a) shows icons of a plurality of applications on a home screen of a mobile phone, according to some embodiments of the application;
FIG. 3 (b) illustrates a viewfinder interface of a mobile phone, according to some embodiments of the application;
FIG. 3 (c) illustrates an image taken by a mobile phone that approximates the user's real state, according to some embodiments of the application;
FIG. 4 (a) shows icons of a plurality of applications on a home screen of a mobile phone, according to some embodiments of the application;
FIG. 4 (b) shows a plurality of images included in a gallery of a mobile phone, according to some embodiments of the application;
FIG. 4 (c) shows the corrected image of an image selected by a user in a gallery of a mobile phone, according to some embodiments of the application;
FIG. 5 illustrates a block diagram of an image processing apparatus, according to some embodiments of the application;
FIG. 6 (a) illustrates a process of correcting wide-angle distortion present in a wide-angle distorted portrait by the wide-angle distortion correction unit illustrated in FIG. 5, according to some embodiments of the present application;
FIG. 6 (b) illustrates the image output after the wide-angle deformed portrait image shown in FIG. 1 (a) is divided into regions and corrected by the correction method shown in FIG. 6 (a), according to some embodiments of the application;
FIG. 7 (a) illustrates a process by which the perspective distortion correction unit shown in FIG. 5 corrects the perspective distortion present in the perspective distorted portrait, according to some embodiments of the present application;
FIG. 7 (b) illustrates another process by which the perspective distortion correction unit shown in FIG. 5 corrects the perspective distortion present in the perspective distorted portrait, according to some embodiments of the present application;
FIG. 8 is a block diagram illustrating a process for correcting an image captured by a mobile phone using the image processing method according to some embodiments of the present application;
FIG. 9 is a block diagram illustrating another process for correcting an image captured by a mobile phone using the image processing method according to some embodiments of the present application;
FIG. 10 is a block flow diagram of a mobile phone for correcting images selected by a user in a gallery of the mobile phone using the image processing method of the present application according to some embodiments of the present application;
FIG. 11 is a block diagram illustrating another process for correcting images selected by a user in a gallery of a mobile phone using an image processing method provided by the present application, according to some embodiments of the present application;
FIG. 12 illustrates a block diagram of an electronic device, according to some embodiments of the application;
FIG. 13 illustrates a block diagram of a mobile phone, according to some embodiments of the application;
fig. 14 illustrates a block diagram of a system on a chip, according to some embodiments of the application.
Detailed Description
Illustrative embodiments of the application include, but are not limited to, an image processing method, electronic device, medium, and system.
In this technical scheme, the acquired image is analyzed to judge whether a target object (such as a person) exists in it; if so, the deformation type of the target object is determined from the object distance of the image (i.e., the distance from the photographed subject to the camera) and the position of the target object within the image, and a corresponding correction method is adaptively matched to the determined type. Specifically, when the object distance of the acquired image exceeds an object distance threshold, a target object at the image boundary is determined to have wide-angle deformation and a target object at the image center to have perspective deformation; when the object distance is within the threshold, whether the image contains one target object or several, a target object at the image center is determined to have perspective deformation and a target object at the image boundary to have wide-angle deformation. A correction strategy matched to the deformation type of the image can thus be applied adaptively, so that the corrected image reflects the real state of the photographed subject more truthfully and attractively, improving the user experience.
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 2 illustrates a scene in which the mobile phone 100 corrects a user's self-portrait using an image processing method according to some embodiments of the present application. As shown in fig. 2, a user takes a self-portrait with the mobile phone 100: the phone starts its front camera, which photographs the user and the background the user is in. In the prior art, the wide-angle deformation shown in fig. 1 (a) or the perspective deformation shown in fig. 1 (c) and fig. 1 (e) would occur during such photographing. In the embodiment shown in fig. 2, the mobile phone 100 executes the image processing method provided by the embodiments of the present application: whatever the shooting distance or field angle, when the user shoots with the front camera, the phone corrects the perspective and wide-angle deformation present in the captured image and finally obtains a corrected image, for example an image closer to the user's real state, as shown in fig. 1 (d).
Specifically, in the embodiment shown in fig. 3 (a), the home screen of the mobile phone 100 contains icons of a plurality of applications, including the icon of a camera APP (application). When the user acts on the camera APP of the mobile phone 100, for example by tapping its icon with a finger or issuing a voice command to open it, the mobile phone 100 detects the tap or receives the command, starts the front camera, and displays a viewfinder interface, such as the one shown in fig. 3 (b) with the currently framed portrait and scenery. When the user taps the shooting control, an image close to the user's real state, as shown in fig. 3 (c), is finally captured.
It can be appreciated that the image processing method provided by the present application is not limited to correcting images captured by the front camera of the mobile phone 100. In some embodiments, when the user shoots with the mobile phone 100, the phone may instead activate a rear camera and photograph the subject and the environment through it. The mobile phone 100 can likewise use the image processing method provided by the present application to correct the perspective and wide-angle deformation present in images shot through the rear camera, obtaining corrected images.
In addition, it can be appreciated that in some embodiments the mobile phone 100 may directly correct the perspective and wide-angle deformation present in a captured image: when the user taps the camera APP and shoots, the phone directly displays the corrected image on its screen. In other embodiments, when the mobile phone 100 recognizes perspective or wide-angle deformation in the captured image, it may output a prompt letting the user choose whether correction is needed. If so, the mobile phone 100 corrects the deformation in the image according to the image processing method provided by the embodiments of the present application and finally outputs the corrected image; if not, the mobile phone 100 outputs the uncorrected image.
In addition, it can be appreciated that, in some embodiments, the mobile phone 100 may also correct images with perspective or wide-angle deformation stored in its gallery, according to the image processing method provided by the embodiments of the present application. The images stored in the gallery may be uncorrected images captured by the mobile phone 100 itself, or deformed images downloaded from the network (for example, images downloaded from a search engine or from chat software). Specifically, in the embodiment shown in fig. 4 (a), the home screen of the mobile phone 100 contains icons of a plurality of applications, including the icon of the gallery. When the user acts on the gallery icon, for example by tapping it with a finger or issuing a voice command to open the gallery, the mobile phone 100 detects the tap or receives the command, opens the gallery, and displays its images. For example, in the interface shown in fig. 4 (b), the gallery contains three images: one of a woman and two of men. The woman's portrait looks rather unnatural: the central area of the face (around the nose) appears swollen, most conspicuously the nose itself (clearly oversized); the face shape is deformed (elongated); and the ear farther from the lens is shrunken, with the far edge of the face tending to vanish. That is, perspective deformation has occurred. When the user chooses to have the mobile phone 100 correct this image by the image processing method provided in the embodiments of the present application, the phone outputs the corrected image shown in fig. 4 (c).
In addition, it can be understood that the image processing method provided by the embodiments of the present application is not limited to correcting the perspective and wide-angle deformation of portraits; it can also correct these deformations for other strongly three-dimensional subjects, such as animals (for example, cats and dogs) and objects (for example, human and animal figurines).
In the image processing method provided by the embodiments of the present application, the acquired image is analyzed to judge whether a target object (such as a person) exists; if so, the deformation type of the target object is determined from the object distance of the image (i.e., the distance from the photographed subject to the camera) and the position of the target object in the image, and a corresponding correction method is adaptively matched to the determined type. Specifically, when the object distance of the acquired image exceeds the object distance threshold, a target object at the image boundary is determined to have wide-angle deformation and a target object at the image center perspective deformation; when the object distance is within the threshold, whether the image contains one target object or several, a target object at the center is determined to have perspective deformation and a target object at the boundary wide-angle deformation.
After determining that a target object in the image has perspective deformation, the image is segmented into foreground and background, and the feature point positions of the foreground are acquired. Using the object distance of the image and the foreground feature point positions, an optical flow estimation algorithm estimates the offsets between the target object's feature point positions and the positions they would have in a mid-focal-length shot, and the offsets are used to map the target object to its mid-focal-length appearance. The foreground and background of the perspective-deformed image are then fused, either by building an energy function that constrains them and solving it by optimization, or by a deep-learning-based image fusion method, to obtain the final corrected image.
Alternatively, after determining that a target object in the image has perspective deformation, the object distance of the image and the feature point positions of the foreground can be fitted to a deformable reference target object (such as a reference face) to obtain a 3D model corresponding to the target object in the image. A virtual camera is then constructed to map the 3D model back into a 2-dimensional image, giving the corrected target object, and the foreground and background of the perspective-deformed image are fused by an image fusion method similar to the above to obtain the final corrected image.
After determining that a target object in the image has wide-angle deformation, the image is divided into regions, and a coordinate matrix and a color matrix of the image are obtained based on a portrait deformation correction algorithm. Constraint terms (which control the positional relationship of each image region before and after correction) and corresponding weight coefficients are set for each divided region, the constraint equation they compose is solved for the displacement matrix, and the corrected coordinate matrix is obtained. The color of each pixel at the corresponding position of the pre-correction coordinate matrix is filled into each pixel of the corrected coordinate matrix to obtain a new image, and the irregular holes at the edges of the new image are cropped away (for example, by cutting with the largest rectangular frame the image can display) to obtain the final corrected image.
In this way, the present application can adaptively match a correction strategy to the deformation type of the image and fuse the foreground and background of the image, so that the corrected image reflects the real state of the photographed subject more truthfully and attractively, improving the user experience.
Specifically, fig. 5 shows a block diagram of an image processing apparatus 500 according to an embodiment of the present application. The image processing apparatus 500 can capture an image and, when deformation is present in the image, correct it. Specifically, the image processing apparatus 500 includes an acquisition module 510, an image correction preprocessing module 520, an image distortion correction module 530, and an image encoding and storage module 540. The acquisition module 510 captures images and acquires their object distance information. The image correction preprocessing module 520 performs noise reduction, enhancement, dynamic range correction, optical distortion correction, and similar operations on the image captured by the acquisition module 510, suppressing the additive noise, quantization noise, and the like present in the image before correction by the image processing method provided by the embodiments of the present application, and correcting the optical distortion caused by the limits of the optical module design, thereby reducing image distortion and improving the final correction accuracy. It also performs analysis such as face detection, facial feature point detection, facial pose estimation, and portrait segmentation on the captured image, providing information such as the facial feature point positions for the portrait correction performed by the image processing method of the embodiments of the present application.
The image distortion correction module 530 adaptively corrects the image processed by the image correction preprocessing module 520 according to the deformation type, once it is determined that deformation exists in the input image. For example, when the input image has perspective deformation, the image distortion correction module 530 corrects the perspective deformation; when it has wide-angle deformation, the module corrects the wide-angle deformation; and when it has both, the two may be corrected in a preset order. The image encoding and storage module 540 includes an image encoding unit 541 and an image storage unit 542 for encoding and storing the image corrected by the image distortion correction module 530 and displaying it in the gallery.
With continued reference to fig. 5, the acquisition module 510 includes an image acquisition unit 511 and an object distance information acquisition unit 512. The image acquisition unit 511 captures images; for example, the mobile phone 100 shown in fig. 2 captures an image containing the user's portrait through its front camera. The object distance information acquisition unit 512 acquires the object distance of the image, for example through a time-of-flight (TOF) device or a binocular camera. The deformation type present in the image is judged from the object distance of the image, and the correction of the image can also draw on the object distance. For example, when the object distance of the captured image exceeds the object distance threshold, a portrait at the image boundary is determined to have wide-angle deformation; when the object distance is within the threshold, whether the image contains one portrait or several, a portrait at the image center is determined to have perspective deformation and a portrait at the image boundary wide-angle deformation.
In some embodiments, a portrait may be considered to lie at the image boundary when, within the pixel region corresponding to its face, the proportion of pixels beyond a set field angle to the total number of face pixels of the portrait exceeds a ratio threshold. For example, if the maximum field angle of the image is 70 degrees, the portrait is considered to be at the image boundary when more than 10% of its face pixels lie beyond a field angle of 60 degrees.
In some embodiments, the position of the portrait in the image may instead be determined from coordinate information: specifically, the portrait is considered to be at the image boundary when, within the pixel region corresponding to its face, the proportion of pixels whose coordinates fall into the edge region of the image to the total number of face pixels exceeds a ratio threshold. Other embodiments of the present application may determine the position of the portrait in other ways, to which the present application is not limited.
Further, it is understood that in the acquired image, the area other than the boundary region is the central region of the image.
Specifically, the principle by which a TOF device acquires the object distance of an image is as follows: continuous near-infrared pulses are emitted toward the photographed subject (for example, a user currently taking a self-portrait with the mobile phone 100), the light pulses reflected by the subject are received by a sensor, and the transmission delay is computed from the phase difference between the emitted light pulses and the reflected ones, yielding the object distance of the subject. The principle by which a binocular camera acquires the object distance of an image is as follows: the binocular camera is calibrated to obtain its intrinsic and extrinsic parameters; the two images it captures of the same subject are rectified according to the calibration result; the pixels of the rectified pair are matched (for example, with local matching, global matching, region-based stereo matching, feature-based stereo matching, or phase-based stereo matching algorithms); and the disparity is computed, from which the object distance of the captured image follows.
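For reference, the standard range equations behind these two schemes, which the text paraphrases, are as follows: for a continuous-wave TOF device with light speed $c$, modulation frequency $f_{\mathrm{mod}}$, and measured phase difference $\Delta\varphi$, and for a rectified stereo pair with focal length $f$ (in pixels), baseline $B$, and disparity $d$:

$$d_{\mathrm{TOF}} = \frac{c}{2} \cdot \frac{\Delta\varphi}{2\pi f_{\mathrm{mod}}}, \qquad Z_{\mathrm{stereo}} = \frac{f B}{d}.$$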
The image correction preprocessing module 520 includes an image post-processing unit 521 and an image optical distortion correction unit 522, which preprocess the noise, distortion, and similar defects in the captured image to improve the accuracy of the subsequent analysis and correction. The image post-processing unit 521 performs post-processing such as noise reduction and enhancement on the image captured by the acquisition module 510. For example, mean filtering, median filtering, adaptive Wiener filtering, and similar algorithms remove the Gaussian noise, Poisson noise, and salt-and-pepper noise introduced during capture and transmission. Gray-level transformation enhancement, histogram enhancement, and similar methods adjust the gray values of the pixels of the captured image to highlight features of interest (such as the facial features and face shape of a portrait), suppress features of no interest (such as the background), and improve the visual quality and clarity of the image. The image optical distortion correction unit 522 corrects, by an optical distortion correction algorithm (for example, Zhang Zhengyou's calibration algorithm), the barrel distortion and similar optical distortion imposed on the captured image by the limits of the lens optical design of the optical module.
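The preprocessing described here maps onto standard OpenCV primitives. The sketch below is illustrative only: the specific filter choices (median blur, luma histogram equalization) and the kernel size are assumptions in the spirit of the text, and the camera matrix and distortion coefficients would come from a prior Zhang-style calibration.

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray,
               camera_matrix: np.ndarray = None,
               dist_coeffs: np.ndarray = None) -> np.ndarray:
    """Illustrative noise reduction, enhancement, and optical-distortion
    correction; parameter values are assumptions, not from the patent."""
    out = cv2.medianBlur(image_bgr, 3)                 # salt-and-pepper noise removal
    # Contrast enhancement on the luma channel (histogram equalization).
    ycrcb = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    out = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # Optical (e.g. barrel) distortion correction from a prior calibration.
    if camera_matrix is not None and dist_coeffs is not None:
        out = cv2.undistort(out, camera_matrix, dist_coeffs)
    return out
```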
The image correction preprocessing module 520 further includes an image analysis unit 523. The image analysis unit 523 analyzes the captured image to obtain information such as the position of the face frame and the positions of the facial feature points, and outputs the results to the image distortion correction module 530 for correction. For example, the image analysis unit 523 performs face detection with a deep-learning-based CV (Computer Vision) algorithm to obtain information such as the position of the face frame. Facial feature points are detected on the basis of the face detection; for example, the mouth, eyes, nose, ears, and other feature points of a face in the portrait are located with detection methods based on shape models and classifiers, or on global-to-local regression models, yielding the positions of the facial feature points. Facial pose estimation, for example with the OpenCV library, yields the orientation angle of the face. A deep learning method then segments the face based on the positions of the facial feature points (for example, into the nose, mouth, left-eye, and right-eye regions of a person).
With continued reference to fig. 5, the image distortion correction module 530 includes a wide-angle distortion correction unit 531 and a perspective distortion correction unit 532. The wide-angle distortion correction unit 531 corrects the wide-angle deformation present in an image based on the image processing method provided by the embodiments of the present application; the perspective distortion correction unit 532 corrects the perspective deformation present in an image on the same basis. It is understood that the two units may also be combined into one module or divided into more. When wide-angle and perspective deformation are present in an image at the same time, the image distortion correction module 530 may correct them in a preset order: it may correct the perspective deformation first and then the wide-angle deformation of the perspective-corrected image, correct the wide-angle deformation first and then the perspective deformation of the wide-angle-corrected image, or correct both simultaneously. The present scheme is not limited in this respect.
It should be understood that the modules of the image processing apparatus 500 provided in the embodiments of the present application are divided by function; in other embodiments, the image processing apparatus 500 may include more or fewer modules than illustrated, and modules may be combined or split differently. The configuration of the image processing apparatus 500 illustrated in the embodiments of the present application does not constitute a specific limitation of the apparatus.
The correction principles of the wide-angle distortion correction unit 531 and the perspective distortion correction unit 532 are described in detail below.
First, the principle by which the wide-angle distortion correction unit 531 corrects the wide-angle deformation present in a portrait is described with reference to fig. 5 and fig. 6.
In the embodiments of the present application, an image with wide-angle deformation is divided into regions so that each region of the image can be optimized independently. Specifically, a coordinate matrix and a color matrix of the image can be obtained based on a portrait deformation correction algorithm, and constraint terms can be set for the divided regions (to control the positional relationship of the regions of the image before and after correction). Corresponding weight coefficients are set for the constraint terms, and a constraint equation is constructed from the constraint terms, the weight coefficients, and a displacement matrix (obtained from the coordinate matrix of the image before correction and the desired coordinate matrix of the corrected image). Solving the equation for the displacement matrix yields the corrected coordinate matrix. The color information of each pixel at the corresponding position of the pre-correction coordinate matrix is then filled into each pixel of the corrected coordinate matrix to obtain a new image, and the irregular holes at the edges of the new image are cropped away (for example, by cutting with the largest rectangle the image can display) to obtain the final corrected image.
Specifically, fig. 6 (a) shows the process by which the wide-angle distortion correction unit 531 corrects the wide-angle deformation present in a portrait in some embodiments. As shown in fig. 6 (a), the process includes:
a) The wide-angle distortion correction unit 531 receives the image shot at a large field angle and processed by the image correction preprocessing module 520 (for example, the image shown in fig. 6 (b), which contains three portraits, the two at the edges visibly stretched and deformed; hereinafter referred to as the wide-angle deformed portrait image for convenience of description), marks the sub-regions of the received image, and converts the image into a coordinate matrix and a color matrix according to a portrait deformation correction algorithm.
Specifically, in some embodiments, to correct the portraits in the wide-angle deformed portrait image while ensuring that the background is not seriously distorted in the process, the image must be marked so as to divide it into regions. For example, in the embodiment shown in fig. 6 (b), the following regions are marked in the wide-angle deformed portrait image: the face regions (face region 1 to face region 3), the body regions (for the three portraits shown in fig. 6 (b), the areas other than the respective face regions, i.e., body region 1 to body region 3), the field-of-view edge region, the background region, and the invalid region (for example, the irregular hole regions at the edges of the wide-angle deformed portrait image). Each face region covers a person's head, neck, and associated accessories (such as hats and glasses); each body region covers a person's torso and limbs; the field-of-view edge region is the annular strip along the edge of the valid area of the image; and the background region is the valid area of the image outside the face regions, body regions, field-of-view edge region, and invalid region.
The image is then converted into a coordinate matrix and a color matrix according to a portrait deformation correction algorithm (e.g., a spherical projection algorithm or a Mercator projection algorithm, or the adaptive spherical projection and adaptive Mercator algorithms evolved from them). The coordinate matrix records the position of each pixel in the image; for example, the plane position of the pixel in the j-th row and i-th column is expressed as M(j, i) = (u(j, i), v(j, i)), where u(j, i) and v(j, i) are the abscissa and ordinate, respectively. The color matrix records the color information of each pixel; for example, the color of the pixel in the j-th row and i-th column is C(j, i) = (r(j, i), g(j, i), b(j, i)), where r(j, i), g(j, i) and b(j, i) are its Red, Green and Blue components in the RGB (Red Green Blue) color space.
Assuming the width and height of the wide-angle deformed image shown in fig. 6 are W and H respectively, the coordinates may be normalized for convenience of calculation: u0 = (i - W/2)/W, v0 = (H/2 - j)/H, so that the coordinate matrix of the wide-angle deformed image is expressed as M0: M0(j, i) = (u0(j, i), v0(j, i)). Because the face and the body differ in structure, users tolerate their deformation differently, so different correction algorithms may be used for each: for example, face deformation may be corrected with a spherical projection algorithm and body deformation with a Mercator projection algorithm, yielding a mapped coordinate matrix for each marked region of the wide-angle deformed image shown in fig. 6 (b).
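As an illustration of the normalization just described, the following is a minimal numpy sketch (not part of the claimed method) that builds the normalized coordinate matrix M0 and applies an illustrative stereographic ("spherical") reprojection of the kind that may be used for face regions; the focal parameter f and the function names are assumptions of this sketch.

    import numpy as np

    def normalized_coords(h: int, w: int) -> np.ndarray:
        # Normalized coordinate matrix M0: u0 = (i - W/2)/W, v0 = (H/2 - j)/H
        j, i = np.mgrid[0:h, 0:w].astype(np.float64)
        u0 = (i - w / 2.0) / w
        v0 = (h / 2.0 - j) / h
        return np.stack([u0, v0], axis=-1)   # shape (H, W, 2)

    def spherical_projection(m0: np.ndarray, f: float = 0.5) -> np.ndarray:
        # Map each pixel's perspective radius to a stereographic
        # (sphere-preserving) radius, relaxing the edge stretching of faces.
        u, v = m0[..., 0], m0[..., 1]
        r_p = np.hypot(u, v)
        theta = np.arctan2(r_p, f)             # incidence angle of the ray
        r_s = 2.0 * f * np.tan(theta / 2.0)    # stereographic radius
        scale = np.where(r_p > 1e-8, r_s / r_p, 1.0)
        return np.stack([u * scale, v * scale], axis=-1)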
b) The wide-angle deformation correction unit 531 sets constraint terms for the portrait regions, the field-of-view edge region and the background region respectively, to control the scaling and displacement of the portraits, the stretching and distortion of the boundary and the background, and so on. Each constraint term is expressed in terms of the position coordinates that the corresponding region of the expected corrected image should have. Corresponding weight coefficients are set for the constraint terms, and a constraint equation in the coordinate matrix of the pre-correction image and a displacement matrix is constructed from the constraint terms and weight coefficients, where the displacement matrix is determined by the deviation between the position coordinates of the coordinate matrix of the pre-correction image and those of the coordinate matrix of the expected corrected image. Solving this equation for the displacement matrix yields the corrected coordinate matrix.
Specifically, using the normalized coordinate matrix M0, the coordinate matrices obtained through the portrait deformation correction algorithm, and the constraint terms and weight coefficients set above, the displacement matrix Dt(j, i) is solved by a least squares method or gradient descent, so that the coordinate matrix of the corrected image can be expressed as Mt(j, i) = M0(j, i) + Dt(j, i).
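The constraint equation itself depends on the constraint terms chosen for each region, but its general shape can be illustrated. Below is a minimal sketch, under an assumed two-term energy (a data term pulling face pixels toward the projection target, and a smoothness term protecting the background and boundaries from tearing), of solving for Dt by gradient descent; all names and weight values are assumptions of this sketch.

    import numpy as np

    def solve_displacement(m0, m_target, face_mask,
                           w_face=4.0, w_smooth=1.0, iters=1000, lr=0.05):
        # Minimize w_face*||mask*((M0+Dt) - M_target)||^2
        #        + w_smooth*||grad Dt||^2  over the displacement field Dt
        dt = np.zeros_like(m0)
        mask = face_mask[..., None].astype(np.float64)
        for _ in range(iters):
            g_data = 2.0 * w_face * mask * ((m0 + dt) - m_target)
            lap = (-4.0 * dt
                   + np.roll(dt, 1, 0) + np.roll(dt, -1, 0)
                   + np.roll(dt, 1, 1) + np.roll(dt, -1, 1))
            g_smooth = -2.0 * w_smooth * lap   # gradient of smoothness term
            dt -= lr * (g_data + g_smooth)
        return m0 + dt   # corrected coordinate matrix Mt = M0 + Dt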
c) The wide-angle deformation correction unit 531 obtains a new image based on the corrected coordinate matrix and the color matrix. Specifically, after the corrected coordinate matrix Mt(j, i) is obtained, the color C(j, i) of each pixel of the pre-correction coordinate matrix M0(j, i) is filled into the corresponding pixel of the corrected image, yielding an image of resolution Wt×Ht.
d) The wide-angle deformation correction unit 531 crops the image and outputs a rectangular image at the target resolution. To obtain a rectangular image of resolution Wt×Ht, the corrected image may be cropped with a rectangular frame of resolution Wt×Ht to remove the holes at its edges, and the cropped image may then be enlarged to obtain the corrected image shown in fig. 6 (b); it can be seen that the stretching deformation present in the input wide-angle deformed image has been corrected.
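A compact OpenCV sketch of steps c) and d) follows, assuming the corrected coordinate matrix has been converted into the backward map that cv2.remap expects (for each corrected pixel, the source coordinate to sample from); the 5% crop margin is an arbitrary assumption standing in for the maximum-rectangle search.

    import cv2
    import numpy as np

    def render_and_crop(img, mt_backward, out_w, out_h):
        h, w = img.shape[:2]
        # Undo the normalization u0 = (i - W/2)/W, v0 = (H/2 - j)/H
        map_x = (mt_backward[..., 0] * w + w / 2.0).astype(np.float32)
        map_y = (h / 2.0 - mt_backward[..., 1] * h).astype(np.float32)
        out = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT, borderValue=0)
        # Crop a central region free of border holes, then scale to Wt x Ht
        my, mx = int(0.05 * h), int(0.05 * w)
        out = out[my:h - my, mx:w - mx]
        return cv2.resize(out, (out_w, out_h), interpolation=cv2.INTER_LINEAR)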
The principle by which the perspective deformation correction unit 532 corrects the perspective deformation present in a portrait will be described below with reference to figs. 5 and 7.
In the embodiment of the application, the face and the background in the perspective deformed image are segmented, and the facial feature points are detected to obtain information such as their positions. Using the object distance information of the image and the positions of the facial feature points, an optical flow estimation algorithm estimates the offsets between the feature point positions of the deformed face and those of an undeformed face photographed at mid-focus; the offsets are then input into a pre-trained neural network model to obtain, from these offsets, the mid-focus portrait corresponding to the perspective deformed image. The face and the background of the perspective deformed image are then fused by a deep-learning-based image fusion method to obtain the final corrected image. Here, the undeformed mid-focus face is the image that the person in the perspective deformed image to be corrected would actually present if photographed at a mid-focus segment (for example, a focal length of 50 mm to 80 mm).
In particular, fig. 7 (a) illustrates a process by which the perspective deformation correction unit 532 corrects the perspective deformation present in a perspective deformed portrait in some embodiments. Specifically, the correction process includes:
a) The perspective deformation correction unit 532 receives the near-focus photographed image processed by the image correction preprocessing module 520, together with the object distance information (i.e., depth information) of the image acquired by a TOF device or a binocular camera, and segments the portrait region and the background region in the image. The received image may be, as shown in fig. 1 (c), a portrait in which perspective deformation such as an enlarged nose, shrunken ears and an elongated face has occurred, compared with the real state, because the shooting distance was too close; or, as shown in fig. 1 (e), a portrait in which the facial features have become flattened because the shooting distance was too long. For convenience of description, this image is hereinafter referred to as the "perspective deformed image". In some embodiments, the perspective deformation correction unit 532 may segment the portrait and the background in the perspective deformed image based on the K-means, watershed or GrabCut algorithm, or based on deep-learning image segmentation or instance segmentation, and detect the facial feature points in the perspective deformed image, for example locating feature points such as the mouth, eyes, nose and ears of the face by a detection method combining a shape model with a classifier, so as to obtain information such as the positions of the facial feature points.
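As a concrete classical stand-in for the segmentation and landmark detection named in this step, the sketch below uses OpenCV's GrabCut (seeded with a detected face rectangle) and the contrib FacemarkLBF landmark detector; a production pipeline would more likely use the deep-learning models mentioned above, and the lbfmodel.yaml path is an assumption.

    import cv2
    import numpy as np

    def segment_portrait(img, face_rect):
        # Rough portrait/background split seeded by a face rectangle (x, y, w, h)
        mask = np.zeros(img.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, face_rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
        return fg.astype(np.uint8)             # 1 = portrait, 0 = background

    def detect_landmarks(img, face_rects, model_path="lbfmodel.yaml"):
        # Facial feature points (eyes, nose, mouth outline, jaw) via LBF;
        # requires opencv-contrib-python and a downloaded model file.
        fm = cv2.face.createFacemarkLBF()
        fm.loadModel(model_path)
        ok, landmarks = fm.fit(img, np.array(face_rects))
        return landmarks if ok else []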
b) The perspective deformation correction unit 532 processes the positions of the detected facial feature points and the acquired object distance information of the perspective deformed image with an optical flow estimation algorithm, obtaining the offsets between the feature point positions of the perspective deformed portrait and those of a portrait photographed at mid-focus. Specifically, in some embodiments, the offset between each feature point in the perspective deformed portrait and the position of the corresponding feature point of the mid-focus portrait may be estimated by the Lucas-Kanade algorithm, the FlowNet algorithm (a deep-learning-based optical flow estimation algorithm), or the like. The mid-focus portrait may be an image of the person in the perspective deformed image actually photographed at a mid-focus segment (for example, a focal length of 50 mm to 80 mm) before the perspective deformed image is corrected by the image processing method provided by the application.
Because the degree of portrait deformation differs between images photographed at different object distances, performing the optical flow estimation jointly with the object distance information of the perspective deformed image makes its result more accurate. Specifically, the position information of the detected facial feature points and the acquired object distance information of the perspective deformed image may be input into a pre-trained neural network model for optical flow estimation, yielding the feature point offsets corresponding to that combination of feature point positions and object distance. The neural network model may be trained on a large number of samples, each comprising the feature point information of an image with perspective deformation, the object distance information of that image, and the mid-focus image corresponding to the deformed person. It can be understood that when the trained model is used for optical flow estimation, inputting the feature point positions and object distance information of a perspective deformed portrait produces the corresponding feature point offsets.
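For the classical variant of this step, a sparse Lucas-Kanade estimate of the per-feature-point offsets might look like the sketch below; it assumes a real mid-focus reference image is available, whereas the method above predicts the offsets from a network conditioned on object distance.

    import cv2
    import numpy as np

    def estimate_offsets(deformed_gray, reference_gray, feature_pts):
        # Track each facial feature point from the deformed portrait into
        # the reference portrait; the per-point displacement is the offset.
        pts = feature_pts.astype(np.float32).reshape(-1, 1, 2)
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            deformed_gray, reference_gray, pts, None,
            winSize=(21, 21), maxLevel=3)
        offsets = (new_pts - pts).reshape(-1, 2)
        return offsets, status.ravel().astype(bool)   # offset + tracked flag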
c) Based on the offsets, output by the optical flow estimation algorithm, between the feature point positions of the perspective deformed portrait and those of the mid-focus portrait, the perspective deformation correction unit 532 may map the perspective deformed portrait to the portrait photographed at the mid-focus segment (i.e., the mid-focus portrait corresponding to the perspective deformed portrait). Specifically, in some embodiments, these offsets may be input into a pre-trained neural network model to obtain, from them, the mid-focus portrait corresponding to the perspective deformed portrait (which, it will be understood, has no perspective deformation). This neural network model may be trained on the offsets between the feature point positions of perspective deformed portraits and those of mid-focus portraits, together with the mid-focus portraits corresponding to those perspective deformed portraits; it can be understood that once the model is trained, inputting an offset yields the mid-focus portrait corresponding to that offset.
d) Finally, combining the mapped mid-focus portrait (i.e., the foreground) with the background information of the perspective deformed image received by the perspective deformation correction unit 532 (such as background coordinates and color information), the foreground and background are constrained by setting constraint terms similar to those in the wide-angle deformation correction, and the corrected image is obtained by optimization. In other embodiments, the foreground and the background of the perspective deformed image may instead be fused by a deep-learning-based image fusion method.
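Where the learning-based fusion is unavailable, a classical alternative for this step is Poisson blending of the corrected face onto the original background. The sketch below uses OpenCV's seamlessClone and assumes the corrected face image and the background are the same size; all names are illustrative.

    import cv2
    import numpy as np

    def fuse_foreground(corrected_face, background, fg_mask):
        # Clone the corrected face region onto the background at the
        # centroid of the foreground mask (values 0/1).
        ys, xs = np.nonzero(fg_mask)
        center = (int(xs.mean()), int(ys.mean()))
        return cv2.seamlessClone(corrected_face, background,
                                 (fg_mask * 255).astype(np.uint8),
                                 center, cv2.NORMAL_CLONE)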
Fig. 7 (b) shows a process by which the perspective deformation correction unit 532 corrects the perspective deformation present in a portrait in other embodiments. In this embodiment, the object distance information of the image and the position information of the facial feature points are fitted to a deformable reference face, producing a 3D head model corresponding to the face, namely the undeformed 3D head model corresponding to the perspective deformed portrait. A virtual camera is then constructed to map this undeformed 3D head model into the undeformed 2-dimensional image corresponding to the perspective deformed portrait, yielding a corrected face image, and the face and the background of the perspective deformed image are fused by a deep-learning-based image fusion method to obtain the final corrected image. Specifically, as shown in fig. 7 (b), the method includes:
a) The perspective deformation correction unit 532 receives the perspective deformed portrait processed by the image correction preprocessing module 520 (for example, a portrait in which perspective deformation such as an enlarged nose, shrunken ears and an elongated face has occurred because the shooting distance was too close, compared with the real state shown in fig. 1 (c); or a portrait in which the facial features have become flattened because the shooting distance was too long, as shown in fig. 1 (e)), together with the object distance information of the perspective deformed portrait obtained by a TOF device or a binocular camera. The perspective deformation correction unit 532 locates feature points such as the mouth, eyes, nose and ears of the face in the perspective deformed portrait shown in fig. 1 (c) by a detection method based on a shape model combined with a classifier, a detection method based on global-to-local regression models, or the like, obtaining information such as the positions of the facial feature points. It then fits the obtained facial feature point positions to a deformable reference face: specifically, it computes the offsets between the feature point positions of the deformed face and the corresponding feature point positions of the deformable reference face, and adjusts the deformable reference 3D head model accordingly to obtain the 3D head model corresponding to those offsets.
b) A virtual camera is then constructed, and the 3D head model corresponding to the perspective deformed portrait shown in fig. 1 (c) is mapped into a 2-dimensional image. In some embodiments, because there may be a displacement between the 2-dimensional image obtained by projecting the 3D head model and the mid-focus image actually captured at the mid-focus segment corresponding to the perspective deformed image to be corrected, the virtual camera pose may be adjusted with an MVP (Model-View-Projection) matrix (e.g., by setting reasonable shooting parameters) to obtain the facial feature point cloud of the target focal segment (i.e., the mid-focus segment). The displacement vector field between the face region of the 2-dimensional image projected from the 3D head model and the face region of the mid-focus image is computed, and the final corrected face is obtained by mapping through this displacement vector field. Energy functions are then established to constrain the foreground and the background, and optimizing these energy functions fuses them; alternatively, the foreground and the background of the perspective deformed portrait may be fused by a deep-learning-based image fusion method, yielding the corrected image shown in fig. 1 (d).
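The virtual-camera step reduces to multiplying the head-model vertices by an MVP matrix and performing the perspective divide. The sketch below is a minimal numpy illustration; the perspective_matrix helper is an assumed standard projection in which a longer (mid-focus) focal length corresponds to a smaller vertical field of view fov_y.

    import numpy as np

    def perspective_matrix(fov_y, aspect, near=0.1, far=10.0):
        # Standard perspective projection; smaller fov_y ~ longer focal length
        f = 1.0 / np.tan(fov_y / 2.0)
        return np.array([[f / aspect, 0.0, 0.0, 0.0],
                         [0.0, f, 0.0, 0.0],
                         [0.0, 0.0, (far + near) / (near - far),
                          2.0 * far * near / (near - far)],
                         [0.0, 0.0, -1.0, 0.0]])

    def project_head_model(vertices, mvp):
        # vertices: (N, 3) head-model points; mvp: 4x4 Model-View-Projection
        v = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
        clip = v @ mvp.T
        ndc = clip[:, :3] / clip[:, 3:4]      # perspective divide
        return ndc[:, :2]                     # normalized 2D image coordinates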
The following describes an image processing method provided by some embodiments of the present application, applied to the mobile phone 100 to process images captured by the mobile phone 100. Specifically, as shown in fig. 8, the image processing method includes:
1) The cell phone 100 activates the camera to capture an image (800). For example, in some embodiments, when a user clicks the camera APP of the mobile phone 100 with a finger or the user sends a command to open the camera APP to the mobile phone 100 by voice, the mobile phone 100 detects a click operation for the camera APP or receives a command to open the camera APP, starts a front camera or a rear camera, and captures an image of interest to the user when the user clicks the photographing control. It is understood that the subject photographed by the user may include living bodies such as human beings and animals, buildings, natural landscapes, and the like.
2) The mobile phone 100 preprocesses the image captured by the camera (802). In some embodiments, the mobile phone 100 may perform processing such as noise reduction and enhancement on the captured image, for example removing Gaussian noise present in the image with a mean filtering algorithm. The gray values of the pixels in the captured image may be adjusted by gray-level transformation enhancement, histogram enhancement and similar methods, so as to highlight features of interest in the image (such as the facial features and face shape of a portrait), suppress features of no interest (such as background features), improve the visual effect of the image and increase its definition. Optical distortion of the captured image, such as barrel distortion caused by limitations of the optical design of the lens of the optical module, may be corrected with an optical distortion correction algorithm (for example, based on Zhang Zhengyou's calibration method) to reduce the image's distortion.
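A minimal OpenCV sketch of the preprocessing just described (mean-filter noise reduction, histogram enhancement on the luma channel only, and optical distortion correction) follows; camera_matrix and dist_coeffs are assumed to come from a prior Zhang-style lens calibration.

    import cv2
    import numpy as np

    def preprocess(img, camera_matrix, dist_coeffs):
        out = cv2.blur(img, (3, 3))                      # mean-filter denoise
        ycrcb = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)   # equalize luma only
        ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
        out = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
        return cv2.undistort(out, camera_matrix, dist_coeffs)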
3) It is determined whether or not a portrait is included in the photographed image (804). For example, in some embodiments, detection of a face may be performed by a CV (Computer Vision) algorithm based on deep learning, and it may be determined whether or not a portrait is included in a photographed image. It is also possible to determine whether or not a person is included in the photographed image by the AI living body detection method.
4) If no portrait is included in the captured image, the mobile phone 100 crops the preprocessed image and outputs it (806), trimming away the irregular ineffective areas present in the image. For example, the irregular holes remaining after noise reduction, optical distortion correction and similar processing are cropped away before output.
5) If the captured image includes a portrait, it is determined whether the object distance of the image is within an object distance threshold (808). For example, in some embodiments, the object distance information of the image is acquired by a TOF device, and the object distance threshold is L = {l | 0 cm ≤ l ≤ 50 cm}. The object distance of the image is compared with the threshold L to judge whether it falls within L.
6) If the object distance of the image exceeds the threshold range, the portrait in the image is considered to have wide-angle deformation, and the mobile phone 100 corrects the wide-angle deformation of the portrait (810). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
7) If the object distance of the image is within the threshold L, it is further determined whether the portrait in the image is at the image boundary (812), so as to determine which deformation is present in the image. In some embodiments, information such as the position of the face frame may be obtained by face detection with a deep-learning-based CV (Computer Vision) algorithm, to determine whether the portrait is at the image boundary. If the position information shows that the portrait is at the edge of the image, the portrait is considered to have wide-angle deformation, and the mobile phone 100 corrects it (810). (A code sketch of this decision logic follows step 9 below.)
8) If the object distance of the image is within the threshold L and the position information shows that the portrait is at the center of the image, the portrait is considered to have perspective deformation, and the mobile phone 100 corrects it (814). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
9) Finally, the mobile phone 100 crops the corrected image and outputs it (816). For example, the mobile phone 100 crops and outputs the image corrected for perspective and wide-angle deformation. It can be appreciated that in some embodiments the mobile phone 100 may fuse the foreground and background of the corrected image by establishing energy functions over them and optimizing those functions, or by a deep-learning-based image fusion method. In this way, the mobile phone 100 adaptively matches the correction strategy to the deformation type of the portrait in the photographed image, so that the corrected portrait reflects the real appearance of the photographed person more faithfully and attractively, improving the user experience.
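The decision logic of the flow above can be summarized in a few lines; the sketch below is illustrative only, with the 50 cm threshold taken from the example in step 5 and all names assumed.

    def choose_correction(has_portrait, object_dist_cm, face_at_boundary,
                          max_dist_cm=50.0):
        # Fig. 8 flow: beyond the object-distance threshold, or at the image
        # boundary, treat as wide-angle deformation; within the threshold
        # and at the center, treat as perspective deformation.
        if not has_portrait:
            return "crop_and_output"
        if object_dist_cm > max_dist_cm:
            return "wide_angle_correction"
        if face_at_boundary:
            return "wide_angle_correction"
        return "perspective_correction"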
The following describes an image processing method provided by other embodiments of the present application, again taking as an example its application to the mobile phone 100 for processing images captured by the mobile phone 100. Compared with the method shown in fig. 8, the embodiment shown in fig. 9 determines the deformation type not only from the object distance information of the image and the positions of the portraits in it, but also from the number of portraits in the image. Specifically, as shown in fig. 9, the method includes:
1) The handset 100 activates the camera to capture an image (900).
2) The handset 100 pre-processes the image captured by the camera (902). For example, the photographed image is subjected to noise reduction, enhancement, optical distortion correction, and the like.
3) It is determined whether or not a portrait is included in the photographed image (904). For example, in some embodiments, detection of a face may be performed by a CV (Computer Vision) algorithm based on deep learning, and it may be determined whether or not a portrait is included in a photographed image. It is also possible to determine whether or not a person is included in the photographed image by the AI living body detection method.
4) If no portrait is included in the captured image, the mobile phone 100 cuts out and outputs the preprocessed image (906). For example, the image subjected to noise reduction, enhancement, optical distortion correction, and the like is output after clipping.
5) If a portrait is included in the captured image, it is determined whether the object distance of the image is within an object distance threshold (908). For example, in some embodiments, the object distance information (i.e., depth information) of the image is acquired by a TOF device, and the object distance threshold is L = {l | 0 cm ≤ l ≤ 50 cm}. The object distance information of the image is compared with the threshold L to judge whether the object distance falls within L.
6) If the object distance of the image exceeds the threshold range, the portrait in the image is considered to have wide-angle deformation, and the mobile phone 100 corrects the wide-angle deformation of the portrait (910). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
7) If the object distance of the image is within the threshold L, it is further judged whether the image contains a single portrait or multiple portraits (912). For example, information such as the positions and number of face frames is obtained by face detection with a deep-learning-based CV (Computer Vision) algorithm. If only one face frame exists in the image, the image can be considered a single-portrait image; if two or more face frames exist, it can be considered a multi-portrait image.
8) If the object distance of the image is within the threshold L and the image contains a single portrait, it is further determined whether the portrait is at the image boundary or the image center (914). If the portrait is at the image boundary, the mobile phone 100 corrects its wide-angle deformation (910); for the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
9) If the object distance of the image is within the threshold L, the image contains a single portrait, and the portrait is at the center of the image, the portrait is considered to have perspective deformation, and the mobile phone 100 corrects it (916). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
10) If the object distance of the image is within the threshold L and the image contains multiple portraits, it is further determined which portraits are at the image boundary and which are at the image center (918). As before, this can be determined from the position information of the face frames. (A per-face code sketch of this dispatch follows step 13 below.)
11) If the object distance of the image is within the threshold L and the image contains multiple portraits, the mobile phone 100 corrects the wide-angle deformation of the portraits at the image boundary (920). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
12) If the object distance of the image is within the threshold L and the image contains multiple portraits, the mobile phone 100 corrects the perspective deformation of the portraits at the center of the image (922). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
13) Finally, the mobile phone 100 crops the corrected image and outputs it (924). For example, the mobile phone 100 crops and outputs the image corrected for perspective and wide-angle deformation. It can be appreciated that in some embodiments the mobile phone 100 may fuse the foreground and background of the corrected image by establishing energy functions over them and optimizing those functions, or by a deep-learning-based image fusion method. In this way, the mobile phone 100 adaptively matches the correction strategy to the deformation type of the portraits in the photographed image, so that the corrected portraits reflect the real appearance of the photographed persons more faithfully and attractively, improving the user experience.
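For the multi-portrait variant just described, the per-face dispatch might look like the sketch below; correct_wide_angle and correct_perspective are placeholders standing in for the correction units 531 and 532, and each face dict is assumed to carry an at_boundary flag derived from its face-frame position.

    def correct_wide_angle(img, faces):
        # Placeholder for the wide-angle deformation correction unit 531
        return img

    def correct_perspective(img, faces):
        # Placeholder for the perspective deformation correction unit 532
        return img

    def correct_portraits(img, faces, object_dist_cm, max_dist_cm=50.0):
        if object_dist_cm > max_dist_cm:
            return correct_wide_angle(img, faces)       # whole image (910)
        for face in faces:                              # single or multiple
            if face["at_boundary"]:
                img = correct_wide_angle(img, [face])   # steps 8 and 11
            else:
                img = correct_perspective(img, [face])  # steps 9 and 12
        return img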
The following describes an image processing method provided in some embodiments of the present application, applied to the mobile phone 100 to process images in the gallery of the mobile phone 100. Specifically, as shown in fig. 10, the image processing method includes:
1) The mobile phone 100 opens the gallery (1000). The images stored in the gallery may be uncorrected images captured by the mobile phone 100, or deformed images downloaded from a network (e.g., images downloaded from a search engine, or images downloaded from chat software). Specifically, for example, in some embodiments, when the user taps the gallery icon with a finger or issues a voice command to the mobile phone 100 to open the gallery, the mobile phone 100 detects the tap on the gallery icon or receives the command, opens the gallery, and displays its images.
2) The handset 100 pre-processes the image selected by the user in the gallery (1002). In some embodiments, the mobile phone 100 may perform noise reduction, super-resolution, and the like on images selected by the user in the gallery of the mobile phone 100.
3) It is determined whether the image selected by the user in the gallery includes a portrait (1004). For example, in some embodiments, face detection may be performed by a deep-learning-based CV (Computer Vision) algorithm to determine whether the selected image includes a portrait. It is also possible to make this determination by the AI living body detection method.
4) If the image selected by the user in the gallery does not include a portrait, the mobile phone 100 directly outputs the image (1006). In some embodiments, the user may also have the mobile phone 100 apply noise reduction, super-resolution and similar processing to the selected image as needed.
5) If the image selected by the user in the gallery includes a portrait, it is determined whether the object distance of the image is within an object distance threshold (1008). For example, in some embodiments, approximate object distance information for the selected image may be obtained by a deep-learning-based discriminant model, with the object distance threshold L = {l | 0 cm ≤ l ≤ 50 cm}. The object distance information of the image is compared with the threshold L to judge whether the object distance falls within L.
6) If the object distance of the image exceeds the threshold range, the portrait in the image is considered to have wide-angle deformation, and the mobile phone 100 corrects the wide-angle deformation of the portrait (1010). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
7) If the object distance of the image is within the threshold L, it is further determined whether the portrait in the image is at the image boundary (1012), so as to determine which deformation is present in the image. In some embodiments, information such as the position of the face frame can be obtained through a deep-learning-based face detection algorithm to determine whether the portrait is at the image boundary. If the position information shows that the portrait is at the edge of the image, the portrait is considered to have wide-angle deformation, and the mobile phone 100 corrects it (1010).
8) If the object distance of the image is within the threshold L and the position information shows that the portrait is at the center of the image, the portrait is considered to have perspective deformation, and the mobile phone 100 corrects it (1014). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
9) Finally, the mobile phone 100 crops the corrected image and outputs it (1016). For example, the mobile phone 100 crops and outputs the image corrected for perspective and wide-angle deformation. It can be appreciated that in some embodiments the mobile phone 100 may fuse the foreground and background of the corrected image by establishing energy functions over them and optimizing those functions, or by a deep-learning-based image fusion method. In this way, the mobile phone 100 adaptively matches the correction strategy to the deformation type of the portrait in the image selected by the user in the gallery, so that the corrected portrait reflects the real appearance of the photographed person more faithfully and attractively, improving the user experience.
The following describes an image processing method provided by other embodiments of the present application, again taking as an example its application to the mobile phone 100 for processing images in the gallery of the mobile phone 100. Compared with the method shown in fig. 10, the embodiment shown in fig. 11 determines the deformation type not only from the object distance information of the image and the positions of the portraits in it, but also from the number of portraits in the image. Specifically, as shown in fig. 11, the method includes:
1) The mobile phone 100 opens the gallery (1100). The images stored in the gallery may be uncorrected images captured by the mobile phone 100, or deformed images downloaded from a network (e.g., images downloaded from a search engine, or images downloaded from chat software). Specifically, for example, in some embodiments, when the user taps the gallery icon with a finger or issues a voice command to the mobile phone 100 to open the gallery, the mobile phone 100 detects the tap on the gallery icon or receives the command, opens the gallery, and displays its images.
2) The handset 100 pre-processes the image selected by the user in the gallery (1102). In some embodiments, the mobile phone 100 may perform noise reduction, super-resolution, and the like on images selected by the user in the gallery of the mobile phone 100.
3) It is determined whether the image selected by the user in the gallery includes a portrait (1104). For example, in some embodiments, face detection may be performed by a deep-learning-based CV (Computer Vision) algorithm to determine whether the selected image includes a portrait. It is also possible to make this determination by the AI living body detection method.
4) If the image selected by the user in the gallery does not include a portrait, the mobile phone 100 outputs the selected image (1106). In some embodiments, the mobile phone 100 may also apply noise reduction, super-resolution and similar processing to the selected image before outputting it.
5) If the image selected by the user in the gallery includes a portrait, it is determined whether the object distance of the image is within an object distance threshold (1108). For example, in some embodiments, approximate object distance information for the selected image may be obtained by a deep-learning-based discriminant model, with the object distance threshold L = {l | 0 cm ≤ l ≤ 50 cm}. The object distance information of the image is compared with the threshold L to judge whether the object distance falls within L.
6) If the object distance of the image exceeds the threshold range, the portrait in the image is considered to have wide-angle deformation, and the mobile phone 100 corrects the wide-angle deformation of the portrait (1110). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
7) If the object distance of the image is within the threshold L, it is further judged whether the image contains a single portrait or multiple portraits (1112). For example, information such as the positions and number of face frames is obtained by a deep-learning-based face detection algorithm. If only one face frame exists in the image, the image can be considered a single-portrait image; if two or more face frames exist, it can be considered a multi-portrait image.
8) If the object distance of the image is within the threshold L and the image contains a single portrait, it is further determined whether the portrait is at the image boundary or the image center (1114). If the portrait is at the image boundary, the mobile phone 100 corrects its wide-angle deformation (1110); for the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
9) If the object distance of the image is within the threshold L, the image contains a single portrait, and the portrait is at the center of the image, the portrait is considered to have perspective deformation, and the mobile phone 100 corrects it (1116). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
10) If the object distance of the image is within the threshold L and the image contains multiple portraits, it is further determined which portraits are at the image boundary and which are at the image center (1118). As before, this can be determined from the position information of the face frames.
11) If the object distance of the image is within the threshold L and the image contains multiple portraits, the mobile phone 100 corrects the wide-angle deformation of the portraits at the image boundary (1120). For the specific correction method, please refer to the above description of how the wide-angle deformation correction unit 531 corrects wide-angle deformation in a portrait, which is not repeated here.
12) If the object distance of the image is within the threshold L and the image contains multiple portraits, the mobile phone 100 corrects the perspective deformation of the portraits at the center of the image (1122). For the specific correction method, please refer to the above description of how the perspective deformation correction unit 532 corrects perspective deformation in a portrait, which is not repeated here.
13) Finally, the mobile phone 100 crops the corrected image and outputs it (1124). For example, the mobile phone 100 crops and outputs the image corrected for perspective and wide-angle deformation. It can be appreciated that in some embodiments the mobile phone 100 may fuse the foreground and background of the corrected image by establishing energy functions over them and optimizing those functions, or by a deep-learning-based image fusion method. In this way, the mobile phone 100 adaptively matches the correction strategy to the deformation type of the portraits in the image selected by the user in the gallery, so that the corrected portraits reflect the real appearance of the photographed persons more faithfully and attractively, improving the user experience.
Fig. 12 provides a schematic structural diagram of an electronic device 1200. As shown in fig. 12, the electronic device 1200 includes:
a first determining module 1202, configured to acquire an image to be processed;
a second determining module 1204, configured to determine that a target to be corrected that needs to perform deformation correction exists in the image to be processed;
a third determining module 1206, configured to determine a deformation type of the target to be corrected;
and the correction module 1208 is configured to perform deformation correction on the target to be corrected in the image to be processed according to the determined deformation type.
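A minimal sketch of how the four modules of the electronic device 1200 might compose into a pipeline follows; the callables are assumptions standing in for modules 1202 to 1208, a sketch rather than a definitive implementation.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class ImageCorrector:
        detect_target: Callable          # second determining module (1204)
        classify_deformation: Callable   # third determining module (1206)
        correctors: Dict[str, Callable]  # correction module (1208), per type

        def process(self, image) -> Any:
            # first determining module (1202): the image to be processed is
            # acquired upstream and passed in here
            target = self.detect_target(image)
            if target is None:
                return image                 # nothing to correct
            kind = self.classify_deformation(image, target)
            return self.correctors[kind](image, target)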
Fig. 13 illustrates a possible block diagram of the mobile phone 100 shown in fig. 2, according to an embodiment of the present application. The mobile phone 100 is capable of executing the image processing method provided by the embodiment of the present application. Specifically, as shown in fig. 13, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 198, an indicator 192, a camera 193, a display 194, a subscriber identity module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application is not limited to the specific embodiment of the mobile phone 100. In other embodiments of the application, the handset 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The controller can generate operation control signals according to instruction operation codes and timing signals, controlling instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system. Meanwhile, the processor 110 may also store data received by the handset 100 from other electronic devices. For example, in some embodiments of the present application, the processor 110 may perform noise reduction, enhancement, optical distortion correction, etc. on an image captured by the camera 193 or an image selected by a user in a gallery application of the mobile phone 100, perform face detection, analysis, acquire feature point information of a face, and adaptively adapt a corresponding correction scheme based on a type of distortion (wide-angle distortion, perspective distortion) present in the image, and correct the image.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 may be used to connect a charger to charge the mobile phone 100, or to transfer data between the mobile phone 100 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative, and is not limited to the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the above embodiments.
The charge management module 140 is configured to receive a charging input from a charger. The power management module 141 is configured to connect the battery 142, the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The handset 100 may communicate wirelessly with other electronic devices, such as with a wearable device or server, through the wireless communication module 160. The mobile phone 100 may send a wireless signal to a server through the wireless communication module 160, requesting the server to perform wireless network services to handle specific service requirements of the electronic device (e.g., requesting the server to perform a movement route recommendation); the cell phone 100 may also receive recommended movement route information from the server through the wireless communication module 160. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication including 2G/3G/4G/5G, etc. applied to the mobile phone 100. The mobile phone 100 may acquire map information of the user's surroundings through the mobile communication module 150. The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the handset 100 is capable of communication connection with other electronic devices through the mobile communication module 150 or the wireless communication module 160.
In some embodiments, the antenna 1 and the mobile communication module 150 of the mobile phone 100 are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the mobile phone 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies can include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
The mobile phone 100 implements display functions through a GPU, a display 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The mobile phone 100 may implement the photographing function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like. In some embodiments of the present application, the display 194 is used to display the viewfinder interface of the camera APP of the mobile phone 100, images captured by the mobile phone 100, images in the gallery of the mobile phone 100, and the images obtained after the processor 110 of the mobile phone 100 applies the image processing method provided by some embodiments of the present application to an image captured by the camera 193 or selected by the user in the gallery APP of the mobile phone 100.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store application programs (such as a camera APP, a gallery APP, etc.) required for at least one function of the operating system. The data storage area may store data created during use of the mobile phone 100 (e.g., an image obtained by the processor 110 of the mobile phone 100 performing noise reduction, enhancement, and optical distortion correction on an image captured by the camera 193 or an image selected by a user in a gallery application of the mobile phone 100, and a number of face frames obtained by analyzing a person image in the image, a position of the person image, and position information of each feature point in the face). In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications and data processing of the mobile phone 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The handset 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The handset 100 may receive key inputs, generating key signal inputs related to user settings and function control of the handset 100.
Fig. 14 shows a block diagram of a system on chip (SoC) in accordance with an embodiment of the present application. In fig. 14, similar parts have the same reference numerals, and the dashed boxes are optional features of a more advanced SoC. In fig. 14, the SoC includes: an interconnect unit 1450 coupled to the application processor 1410; a system agent unit 1470; a bus controller unit 1480; an integrated memory controller unit 1440; a set of one or more coprocessors 1420, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1430; and a direct memory access (DMA) unit 1460. In one embodiment, the coprocessor 1420 includes a special-purpose processor, such as a network or communication processor, a compression engine, a general-purpose computing on GPU (GPGPU) processor, a high-throughput MIC processor, an embedded processor, or the like.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of the present application, a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used to transmit information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logic unit/module. Physically, one logic unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logic unit/module itself is not the most important, and the combination of functions implemented by these logic units/modules is the key to solving the technical problem posed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above-described device embodiments do not introduce units/modules that are less closely related to solving the technical problem posed by the present application, which does not indicate that the above-described device embodiments do not contain other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.
Claims (12)
1. An image processing method, comprising:
the electronic equipment acquires an image to be processed;
The electronic equipment determines that a target to be corrected, which needs to be subjected to deformation correction, exists in the image to be processed;
The electronic equipment determines the deformation type of the target to be corrected according to the position of the target to be corrected in the image to be processed;
the electronic equipment carries out deformation correction on the image to be processed according to the determined deformation type, and the method comprises the following steps:
in a case that the deformation type is determined to be the first deformation, the electronic equipment performs deformation correction on the image to be processed based on a three-dimensional image determined from feature point information of the target to be corrected, or based on position offsets of feature points of the target to be corrected in the image to be processed, wherein the target to be corrected corresponding to the first deformation is located in a central area of the image to be processed;
and in a case that the deformation type is determined to be the second deformation, the electronic equipment performs deformation correction on the image to be processed based on position parameters of each pixel in the image to be processed and color parameters of each pixel, wherein the target to be corrected corresponding to the second deformation is located in a boundary area of the image to be processed.
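As an illustrative sketch only (not part of the claims), the dispatch of claim 1 can be summarized as follows; the `Target` type and the two `correct_*` stubs are hypothetical stand-ins for the detection and correction steps elaborated in the dependent claims.

```python
# A minimal sketch of the claim-1 dispatch, assuming hypothetical helpers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Target:
    in_central_area: bool  # True -> first deformation, False -> second

def correct_first_deformation(image, target):
    # Placeholder for the 3D-model- or feature-offset-based correction
    # of claims 5 and 6; returns the image unchanged here.
    return image

def correct_second_deformation(image, target):
    # Placeholder for the coordinate-/color-matrix correction of claim 7.
    return image

def process_image(image, target: Optional[Target]):
    if target is None:          # no target needing deformation correction
        return image
    if target.in_central_area:  # first deformation (central area)
        return correct_first_deformation(image, target)
    return correct_second_deformation(image, target)  # second deformation
```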
2. The method of claim 1, wherein the electronic device determining that there is a target to be corrected in the image to be processed that requires deformation correction comprises:
in a case that the electronic equipment determines that a portrait exists in the image to be processed and the object distance of the image to be processed is smaller than or equal to an object distance threshold, the electronic equipment determines that the portrait existing in the image to be processed is a target to be corrected that requires deformation correction; or
in a case that the electronic equipment determines that a portrait exists in the image to be processed, the object distance of the image to be processed is larger than the object distance threshold, and the portrait is located at a first position of the image to be processed, the electronic equipment determines that the portrait existing in the image to be processed is a target to be corrected that requires deformation correction.
3. The method of claim 2, wherein the determining, by the electronic device, the type of deformation of the object to be corrected based on the position of the object to be corrected in the image to be processed comprises:
in a case that the object distance of the image to be processed is smaller than or equal to the object distance threshold and the portrait is at a second position of the image to be processed, the electronic equipment determines that the deformation type of the portrait is the first deformation;
in a case that the object distance of the image to be processed is smaller than or equal to the object distance threshold and the portrait is at a first position of the image to be processed, the electronic equipment determines that the deformation type of the portrait is the second deformation;
in a case that the object distance of the image to be processed is larger than the object distance threshold and the portrait is at the first position of the image to be processed, the electronic equipment determines that the deformation type of the portrait is the second deformation;
and in a case that the object distance of the image to be processed is larger than the object distance threshold and the portrait is at the second position of the image to be processed, the electronic equipment determines that the deformation type of the portrait is the first deformation.
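Read together, the four cases of claim 3 collapse to a position test: the first (boundary) position always yields the second deformation and the second (central) position the first deformation, at any object distance. A hedged restatement as a pure function, with the boolean position encoding as an illustrative assumption:

```python
# The four claim-3 cases as a pure function; the boolean encoding of the
# first/second position is an illustrative assumption, not the patent's.
def classify_deformation(object_distance: float,
                         object_distance_threshold: float,
                         in_first_position: bool) -> str:
    # Cases 2 and 3 of claim 3: first (boundary) position -> second
    # deformation, whether object_distance <= threshold or not.
    if in_first_position:
        return "second"
    # Cases 1 and 4: second (central) position -> first deformation,
    # again at any object distance.
    return "first"
```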
4. A method according to claim 3, wherein the second location of the image to be processed is an area of the image to be processed other than the first location, and the electronic device determines that the portrait is in the first location of the image to be processed by:
in a case that, among the pixels in the pixel area corresponding to the face of the portrait, the proportion of the number of pixels exceeding a set field-of-view angle to the total number of face pixels of the portrait is larger than a proportion threshold, the electronic equipment determines that the portrait is at the first position of the image to be processed; or
in a case that, among the pixels in the pixel area corresponding to the face of the portrait, the proportion of the number of pixels whose coordinates fall into an edge area of the image to be processed to the total number of face pixels of the portrait is larger than the proportion threshold, the electronic equipment determines that the portrait is at the first position of the image to be processed.
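A sketch of the second (coordinate-based) test of claim 4, assuming the face pixels are given as a boolean mask; the edge-band width and ratio threshold below are illustrative values, not values from the patent:

```python
import numpy as np

def portrait_in_first_position(face_mask: np.ndarray,
                               edge_band: int = 64,
                               ratio_threshold: float = 0.3) -> bool:
    """face_mask: HxW boolean array, True on the pixels of the portrait's face."""
    h, w = face_mask.shape
    # Interior region; everything outside it is the edge area of the image.
    interior = np.zeros_like(face_mask, dtype=bool)
    interior[edge_band:h - edge_band, edge_band:w - edge_band] = True
    face_total = int(face_mask.sum())
    if face_total == 0:
        return False
    # Fraction of face pixels whose coordinates fall into the edge area.
    face_in_edge = int((face_mask & ~interior).sum())
    return face_in_edge / face_total > ratio_threshold
```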
5. The method of claim 3, wherein the electronic device determines the type of deformation of the portrait as a first deformation; and
The electronic equipment carries out deformation correction on the image to be processed according to the determined deformation type, and the method comprises the following steps:
the electronic equipment divides the portrait and the background in the image to be processed to obtain a portrait area and a background area;
The electronic equipment acquires facial feature point position information of the person in the person image area;
the electronic equipment calculates, through an optical flow estimation algorithm and based on the acquired facial feature point position information, the offset between the acquired facial feature point positions and the estimated facial feature point positions of the undeformed portrait;
the electronic equipment corrects the portrait in the portrait area based on the acquired facial feature point position information and the calculated offset, obtaining a corrected portrait area;
and the electronic equipment fuses the corrected portrait area and the background area to obtain a corrected image corresponding to the image to be processed.
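A minimal sketch of the claim-5 pipeline under stated assumptions: the portrait mask, the observed landmarks, and the estimated undeformed landmark positions are taken as given (the patent delegates them to a segmentation step and an optical flow estimation algorithm), and densifying the sparse landmark offsets into a full warp field via `scipy.interpolate.griddata` is one plausible realization, not the patent's prescribed one.

```python
# Assumed inputs: image (HxWx3), portrait_mask (HxW, nonzero on the
# portrait), observed_pts and undeformed_pts ((N, 2) arrays of (x, y)
# landmark positions). Slow but simple; a sketch, not an implementation.
import cv2
import numpy as np
from scipy.interpolate import griddata

def correct_portrait(image, portrait_mask, observed_pts, undeformed_pts):
    h, w = image.shape[:2]
    offsets = undeformed_pts - observed_pts            # per-landmark offset
    ys, xs = np.mgrid[0:h, 0:w]
    # Densify the sparse landmark offsets into a full flow field.
    flow_x = griddata(observed_pts, offsets[:, 0], (xs, ys),
                      method="linear", fill_value=0.0)
    flow_y = griddata(observed_pts, offsets[:, 1], (xs, ys),
                      method="linear", fill_value=0.0)
    # Approximate inverse warp: sample the source at pos - offset so the
    # observed landmarks land near their undeformed positions.
    map_x = (xs - flow_x).astype(np.float32)
    map_y = (ys - flow_y).astype(np.float32)
    corrected = cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
    # Fuse: corrected portrait area over the untouched background area.
    mask3 = portrait_mask[..., None] > 0
    return np.where(mask3, corrected, image)
```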
6. The method of claim 3, wherein the electronic device determines the type of deformation of the portrait as a first deformation; and
The electronic equipment carries out deformation correction on the image to be processed according to the determined deformation type, and the method comprises the following steps:
The electronic equipment divides the portrait and the background in the image to be processed to obtain a portrait area and a background area;
The electronic equipment acquires facial feature point position information of the person in the person image area;
The electronic equipment establishes an undeformed three-dimensional head model corresponding to the portrait in the portrait area based on the acquired facial feature point position information and a reference face;
the electronic equipment creates a virtual camera, acquires a two-dimensional image of the three-dimensional head model with the virtual camera based on preset shooting parameters, and takes the two-dimensional image as the corrected portrait area, wherein the preset shooting parameters comprise at least one of a shooting distance and a shooting angle;
and the electronic equipment fuses the corrected portrait area and the background area to obtain a corrected image corresponding to the image to be processed.
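The virtual-camera step of claim 6 amounts to re-projecting the fitted, undeformed 3D head model through a pinhole camera at preset shooting parameters. A schematic sketch, assuming the model has already been fitted from the facial feature points and a reference face (the fitting itself, which is the hard part, is not shown); all default parameter values are illustrative assumptions:

```python
import cv2
import numpy as np

def render_with_virtual_camera(head_vertices: np.ndarray,
                               shooting_distance: float = 2.0,
                               yaw_deg: float = 0.0,
                               focal: float = 1000.0,
                               cx: float = 640.0,
                               cy: float = 360.0) -> np.ndarray:
    """head_vertices: (N, 3) model points; returns their (N, 2) pixel positions."""
    # Intrinsics of the virtual pinhole camera.
    K = np.array([[focal, 0.0, cx],
                  [0.0, focal, cy],
                  [0.0, 0.0, 1.0]])
    rvec = np.array([0.0, np.deg2rad(yaw_deg), 0.0])  # preset shooting angle
    tvec = np.array([0.0, 0.0, shooting_distance])    # preset shooting distance
    # Standard pinhole projection with no lens distortion.
    pts2d, _ = cv2.projectPoints(head_vertices.astype(np.float64),
                                 rvec, tvec, K, None)
    return pts2d.reshape(-1, 2)
```

A full renderer would also rasterize textured triangles at these projected positions; projecting the vertices is the part that the preset shooting distance and angle control.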
7. A method according to claim 3, wherein the electronic device determines the type of deformation of the portrait as a second deformation; and
The electronic equipment carries out deformation correction on the image to be processed according to the determined deformation type, and the method comprises the following steps:
the electronic equipment extracts the position parameter and the color parameter of each pixel in the image to be processed respectively to obtain a first coordinate matrix and a first color matrix;
The electronic equipment constructs a constraint equation corresponding to the first coordinate matrix, and calculates a displacement matrix through the constraint equation, wherein the displacement matrix is a matrix difference between a second coordinate matrix and the first coordinate matrix, and the second coordinate matrix is a matrix formed by position parameters of pixels in the corrected image;
The electronic equipment obtains the second coordinate matrix according to the first coordinate matrix and the displacement matrix;
and the electronic equipment performs color filling on the second coordinate matrix based on the first coordinate matrix and the first color matrix to obtain a corrected image.
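A minimal sketch of claim 7's data flow. The patent's constraint equation, which determines how the boundary-area pixels move, is replaced here by a toy radial term so that the first-coordinate-matrix / displacement-matrix / color-filling plumbing can be shown end to end; the real solver is the patent's, not this one:

```python
import cv2
import numpy as np

def correct_second_deformation(image: np.ndarray,
                               strength: float = 0.08) -> np.ndarray:
    h, w = image.shape[:2]
    # First coordinate matrix: the (x, y) position parameters of every pixel.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Stand-in "constraint equation": a radial term whose displacement
    # grows toward the image borders, where the second deformation lives.
    nx = (xs - w / 2) / (w / 2)
    ny = (ys - h / 2) / (h / 2)
    r2 = nx * nx + ny * ny
    disp_x = strength * nx * r2 * (w / 2)   # displacement matrix, x component
    disp_y = strength * ny * r2 * (h / 2)   # displacement matrix, y component
    # Second coordinate matrix = first coordinate matrix + displacement matrix.
    map_x = (xs + disp_x).astype(np.float32)
    map_y = (ys + disp_y).astype(np.float32)
    # Color filling: sample the first color matrix at the second coordinates.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```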
8. The method according to any one of claims 5 to 7, wherein the electronic device performs deformation correction on the target to be corrected in the image to be processed according to the determined deformation type, including:
The electronic equipment performs preprocessing on the image to be processed to obtain a preprocessed image corresponding to the image to be processed, wherein the preprocessing comprises noise reduction and/or optical distortion correction;
And the electronic equipment carries out deformation correction on the target to be corrected in the preprocessed image according to the determined deformation type.
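A brief sketch of the claim-8 preprocessing stage using standard OpenCV primitives; the camera matrix and distortion coefficients are assumed to come from a prior lens calibration, and the denoising parameters are illustrative:

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray,
               camera_matrix: np.ndarray,
               dist_coeffs: np.ndarray) -> np.ndarray:
    # Noise reduction; the filter strengths and window sizes are illustrative.
    denoised = cv2.fastNlMeansDenoisingColored(image, None, 5, 5, 7, 21)
    # Optical distortion correction using the calibrated lens model.
    return cv2.undistort(denoised, camera_matrix, dist_coeffs)
```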
9. The method of claim 1, wherein the electronic device acquiring the image to be processed comprises:
the electronic equipment, in response to a shooting instruction from a user of the electronic equipment, takes an image acquired by a camera of the electronic equipment as the image to be processed; or
And the electronic equipment acquires the image to be processed from a gallery of the electronic equipment.
10. An electronic device, comprising:
the first determining module is used for acquiring an image to be processed;
the second determining module is used for determining that a target to be corrected which needs to be subjected to deformation correction exists in the image to be processed;
The third determining module is used for determining the deformation type of the target to be corrected according to the position of the target to be corrected in the image to be processed;
the correction module is used for carrying out deformation correction on the target to be corrected in the image to be processed according to the determined deformation type; wherein,
in a case that the deformation type is the first deformation, the correction module performs deformation correction on the target to be corrected in the image to be processed based on a three-dimensional image determined from feature point information of the target to be corrected, or based on position offsets of feature points of the target to be corrected in the image to be processed, wherein the target to be corrected corresponding to the first deformation is located in the central area of the image to be processed;
and in a case that the deformation type is the second deformation, the correction module performs deformation correction on the target to be corrected in the image to be processed based on the position parameters of each pixel in the image to be processed and the color parameters of each pixel, wherein the target to be corrected corresponding to the second deformation is located in the boundary area of the image to be processed.
11. A computer readable medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the image processing method of any of claims 1 to 9.
12. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, and
a processor, which is one of the one or more processors of the electronic device, for performing the image processing method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010450719.0A CN113724140B (en) | 2020-05-25 | 2020-05-25 | Image processing method, electronic device, medium and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113724140A CN113724140A (en) | 2021-11-30 |
CN113724140B true CN113724140B (en) | 2024-08-13 |
Family
ID=78671791
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010450719.0A Active CN113724140B (en) | 2020-05-25 | 2020-05-25 | Image processing method, electronic device, medium and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113724140B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114612324B (en) * | 2022-03-09 | 2024-06-07 | Oppo广东移动通信有限公司 | Image processing method and device for distortion correction, medium and electronic equipment |
CN117351156B (en) * | 2023-12-01 | 2024-03-22 | 深圳市云鲸视觉科技有限公司 | City real-time digital content generation method and system and electronic equipment thereof |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111105367A (en) * | 2019-12-09 | 2020-05-05 | Oppo广东移动通信有限公司 | Face distortion correction method and device, electronic equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101879859B1 (en) * | 2013-06-01 | 2018-07-19 | (주)지오투정보기술 | A image processing system for correcting camera image using image distortion parameter |
CN104822030B (en) * | 2015-04-16 | 2017-10-17 | 北京理工大学深圳研究院 | A kind of squaring antidote of irregular video based on anamorphose |
CN105827899A (en) * | 2015-05-26 | 2016-08-03 | 维沃移动通信有限公司 | Method and device for correcting lens distortion |
JP2017028583A (en) * | 2015-07-24 | 2017-02-02 | キヤノン株式会社 | Image processor, imaging apparatus, image processing method, image processing program, and storage medium |
CN108932698B (en) * | 2017-11-17 | 2021-07-23 | 北京猎户星空科技有限公司 | Image distortion correction method, device, electronic equipment and storage medium |
CN109360254B (en) * | 2018-10-15 | 2023-04-18 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN110189269B (en) * | 2019-05-23 | 2023-06-09 | Oppo广东移动通信有限公司 | Correction method, device, terminal and storage medium for 3D distortion of wide-angle lens |
CN110751602B (en) * | 2019-09-20 | 2022-09-30 | 北京迈格威科技有限公司 | Conformal distortion correction method and device based on face detection |
CN111080544B (en) * | 2019-12-09 | 2023-09-22 | Oppo广东移动通信有限公司 | Face distortion correction method and device based on image and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |