CN111105369B - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents
- Publication number: CN111105369B
- Application number: CN201911253780.XA
- Authority: CN (China)
- Prior art keywords: image, repaired, feature, images, face
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises: acquiring an image to be repaired, the image to be repaired containing a face; finding, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking that image as a reference image; and processing the image to be repaired according to the reference image to obtain a repaired image. By screening from the album an image whose face is most similar to the face in the image to be repaired and whose sharpness is greater than the first threshold, and using that reference image to repair the image to be repaired, the method improves the image quality of the repaired image and, at the same time, improves how faithfully the facial features in the repaired image are restored.
Description
Technical Field
The present application relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
Background
During shooting, multiple cached preview frames can be synthesized to finally output a single high-sharpness image. In practical application scenarios, however, problems such as shaking of the user's phone and ambient light on the photographed portrait being too weak or too strong all cause the cached preview frames to be blurred, and an image synthesized from blurred preview frames is itself blurred. As a result, the image presented to the user is of low quality and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
The image processing method comprises: acquiring an image to be repaired, the image to be repaired containing a face; finding, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking that image as a reference image; and processing the image to be repaired according to the reference image to obtain a repaired image.
The image processing apparatus comprises an acquisition module, a screening module, and a processing module. The acquisition module is configured to acquire an image to be repaired, the image to be repaired containing a face. The screening module is configured to find, in the album, an image whose sharpness is greater than the first threshold and whose face is most similar to the face in the image to be repaired, and to take that image as the reference image. The processing module is configured to process the image to be repaired according to the reference image to obtain a repaired image.
The electronic device includes a housing, an imaging device, and a processor, the imaging device and the processor both being mounted on the housing. The imaging device is configured to capture images, and the processor is configured to: acquire an image to be repaired, the image to be repaired containing a face; find, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and take that image as a reference image; and process the image to be repaired according to the reference image to obtain a repaired image.
The application further provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements: acquiring an image to be repaired, the image to be repaired containing a face; finding, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking that image as a reference image; and processing the image to be repaired according to the reference image to obtain a repaired image.
According to the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium, an image whose face is most similar to the face in the image to be repaired and whose sharpness is greater than the first threshold is screened from the album as a reference image, and the reference image is used to repair the image to be repaired. On one hand, compared with synthesizing an output image from preview frames that are themselves blurred, the repaired image obtained by repairing the image to be repaired with the reference image has higher sharpness, that is, higher image quality. On the other hand, because the face in the reference image is most similar to the face in the image to be repaired, the facial features in the repaired image are more realistic and the repair effect is best. In still another aspect, repairing the image to be repaired with the reference image is not limited to being performed during shooting; it may also be performed during post editing.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic diagram of the structure of an electronic device according to some embodiments of the present application;
FIG. 4 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 5 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 6 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 8 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 9 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 10 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of a second acquisition unit in an image processing apparatus according to some embodiments of the present application;
FIG. 12 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 13 is a schematic diagram of a screening module in an image processing apparatus according to some embodiments of the present application;
FIG. 14 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 15 is a schematic diagram of a detection unit in an image processing apparatus according to some embodiments of the present application;
FIG. 16 is a schematic diagram of a model for extracting face feature vectors according to some embodiments of the present application;
FIG. 17 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 18 is a schematic diagram of a processing module in an image processing apparatus according to some embodiments of the present application;
FIG. 19 is a schematic diagram of generating content features according to some embodiments of the present application;
FIG. 20 is a schematic diagram of generating texture features according to some embodiments of the present application;
FIG. 21 is a schematic diagram of mapping texture features onto content features according to some embodiments of the present application;
FIG. 22 is a schematic diagram of the interaction between a computer-readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, the present application provides an image processing method, which includes:
01: acquiring an image to be repaired, the image to be repaired containing a face;
02: finding, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking that image as a reference image; and
03: processing the image to be repaired according to the reference image to obtain a repaired image.
Referring to fig. 1 and fig. 2, the present application further provides an image processing apparatus 100. The image processing apparatus 100 includes an acquisition module 11, a screening module 12, and a processing module 13, and may be used to implement the image processing method provided by the present application: step 01 may be performed by the acquisition module 11, step 02 by the screening module 12, and step 03 by the processing module 13. That is, the acquisition module 11 may be configured to acquire an image to be repaired, the image to be repaired containing a face. The screening module 12 may be configured to find, in the album, an image whose sharpness is greater than the first threshold and whose face is most similar to the face in the image to be repaired, as the reference image. The processing module 13 may be configured to process the image to be repaired according to the reference image to obtain the repaired image.
Referring to fig. 1 and 3, the present application further provides an electronic device 200. The electronic device 200 includes a housing 210, an imaging device 220, and a processor 230, the imaging device 220 and the processor 230 both being mounted on the housing 210. The imaging device 220 is configured to capture images, and the processor 230 can likewise implement the image processing method provided by the present application; that is, steps 01, 02, and 03 may be implemented by the processor 230. The processor 230 may be configured to: acquire an image to be repaired, the image to be repaired containing a face; find, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and take that image as a reference image; and process the image to be repaired according to the reference image to obtain a repaired image.
According to the image processing method, the image processing apparatus 100, the electronic device 200, and the computer-readable storage medium described above, an image whose face is most similar to the face in the image to be repaired and whose sharpness is greater than the first threshold is screened from the album as the reference image, and the reference image is used to repair the image to be repaired. On one hand, compared with synthesizing an output image from preview frames that are themselves blurred, the repaired image obtained by repairing the image to be repaired with the reference image has higher sharpness, that is, higher image quality. On the other hand, because the face in the reference image is most similar to the face in the image to be repaired, the facial features in the repaired image are more realistic and the repair effect is best. In still another aspect, repairing the image to be repaired with the reference image is not limited to being performed during shooting; it may also be performed during post editing.
Here, the album is the area of the electronic device 200 used to store images. It stores a variety of photos (images), such as landscapes, photos containing faces, and photos of animals, and it contains at least one image that includes a face and whose sharpness is greater than the first threshold.
Referring to fig. 1, 4 and 5, step 01 includes:
011: acquiring an original image with a portrait;
012: acquiring the sharpness of the original image; and
013: determining an original image whose sharpness is less than a second threshold as the image to be repaired, the second threshold being less than the first threshold.
In some embodiments, the acquisition module 11 may further include a first acquisition unit 111, a second acquisition unit 112, and a determination unit 113, where step 011 may be performed by the first acquisition unit 111, step 012 by the second acquisition unit 112, and step 013 by the determination unit 113. That is, the first acquisition unit 111 may be configured to acquire an original image with a portrait; the second acquisition unit 112 may be configured to acquire the sharpness of the original image; and the determination unit 113 may be configured to determine an original image whose sharpness is less than the second threshold as the image to be repaired, the second threshold being less than the first threshold.
Referring to fig. 3, in some embodiments, steps 011, 012, and 013 may all be implemented by the processor 230; that is, the processor 230 may be configured to: acquire an original image with a portrait; acquire the sharpness of the original image; and determine an original image whose sharpness is less than the second threshold as the image to be repaired, the second threshold being less than the first threshold.
Specifically, the original image may be an image stored in the album or an image captured directly by the camera 221, and there may be one or more original images, where "a plurality" means two or more. First, the sharpness of each original image is acquired and compared with the second threshold. When the sharpness is less than the second threshold, the original image has low sharpness and appears blurred, and needs to be repaired, so it is determined to be an image to be repaired. When the sharpness is greater than the second threshold, the original image has high sharpness and does not need to be repaired. When the sharpness is exactly equal to the second threshold, the original image may be treated either as an image to be repaired or not. By comparing the sharpness of each original image with the second threshold, only original images with low sharpness (below the second threshold) are repaired, which reduces the workload of image repair and increases the overall speed of image processing.
Referring to fig. 4, fig. 6, and fig. 7, step 011 includes:
0111: acquiring an original image with a portrait from the album at a predetermined time and/or in a preset scene.
In some embodiments, the first acquisition unit 111 may include a first acquiring subunit 1111, and step 0111 may be performed by the first acquiring subunit 1111. That is, the first acquiring subunit 1111 may be configured to acquire an original image with a portrait from the album at a predetermined time and/or in a preset scene.
Referring to fig. 3, in some embodiments, step 0111 may be implemented by the processor 230; that is, the processor 230 may be configured to acquire an original image with a portrait from the album at a predetermined time and/or in a preset scene.
For acquiring the original image with a portrait from the album at a predetermined time: the predetermined time may be a time at which the user does not normally use the phone. Specifically, the predetermined time may include rest periods for sleep, for example night sleep hours (such as, but not limited to, 22:00-5:00) or a noon break (such as, but not limited to, 12:30-14:00). The predetermined time may also include working hours (such as, but not limited to, 8:00-12:00 and 14:00-18:00), during which the user typically does not use the phone, or class periods (such as, but not limited to, at least one of 8:00-8:40, 9:00-9:45, 10:00-10:45, 11:00-11:45, and so on). Acquiring original images with portraits from the album occupies a certain amount of running memory in the image processing apparatus 100 or the electronic device 200. During sleep, work, or class the user generally does not use the phone, so the image processing apparatus 100 or the electronic device 200 is idle, and performing the acquisition then does not compete for memory the way it would while the apparatus or device is in active use. The predetermined time may be one or more time periods preset by the system, and of course it may also be set by the user according to the user's own needs.
For acquiring the original image with a portrait from the album in a preset scene: the preset scene may include a charging state, a standby state, a low-power-consumption operating state, and the like. Because acquiring original images with portraits from the album can take a long time and occupies a certain amount of running memory, performing the acquisition in a preset scene avoids competing for memory as much as possible. The low-power-consumption operating state may refer to the electronic device 200 running only software that needs little memory, such as reading or browsing the news.
It should be noted that acquiring the original image with a portrait from the album may be performed only at a predetermined time, only in a preset scene, or both at a predetermined time and in a preset scene. In this way, the impact of acquiring original images from the album on the user's normal use of the device is minimized, improving the user experience.
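As an illustration of this gating logic, a background album scan might be guarded as in the following minimal Python sketch. The time windows, the psutil battery check standing in for the "charging state" scene, and all function names are assumptions; the patent names the conditions but specifies no mechanism.

```python
import datetime
import psutil  # battery state; stands in for the "charging state" preset scene

# Example predetermined windows; 22:00-5:00 wraps past midnight.
PREDETERMINED_WINDOWS = [
    (datetime.time(22, 0), datetime.time(5, 0)),
    (datetime.time(12, 30), datetime.time(14, 0)),
]

def in_predetermined_time(now=None):
    now = now or datetime.datetime.now().time()
    for start, end in PREDETERMINED_WINDOWS:
        if start <= end:
            if start <= now <= end:
                return True
        elif now >= start or now <= end:  # window wraps past midnight
            return True
    return False

def in_preset_scene():
    battery = psutil.sensors_battery()
    return battery is not None and battery.power_plugged  # charging state

def may_scan_album():
    # Scan when either condition holds; a stricter policy could require both.
    return in_predetermined_time() or in_preset_scene()
```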
Referring to fig. 4, fig. 8, and fig. 9, step 011 further includes:
0112: during shooting by the camera 221, acquiring the original image with a portrait captured by the camera 221.
In some embodiments, the image processing apparatus 100 may be applied to the imaging device 220, and the imaging device 220 may capture the original image through the camera 221. The first acquisition unit 111 may include a second acquiring subunit 1112, and step 0112 may be performed by the second acquiring subunit 1112. That is, the second acquiring subunit 1112 may be configured to acquire, during shooting by the camera 221, the original image with a portrait captured by the camera 221.
Referring to fig. 3, in some embodiments, the electronic device 200 may include an imaging device 220, the imaging device 220 including a camera 221. Step 0112 may be implemented by processor 230, that is, processor 230 may be configured to: in the process of photographing by the camera 221, an original image with a portrait photographed by the camera 221 is acquired.
Specifically, when the camera 221 of the imaging device 220 is working, the captured original image with a portrait can be obtained in real time, and subsequent repair processing can be performed on the original images that meet the conditions to obtain the target image. In this way, when the user shoots with the imaging device 220 or the electronic device 200, the image obtained (and presented directly to the user) is of higher quality, improving the user experience.
Referring to fig. 4, 10 and 11, step 012 further includes:
0121: performing shaping low-pass filtering on the original image to obtain a first filtered image;
0122: acquiring first high-frequency information in the original image according to the original image and the first filtered image, the first high-frequency information being the portion of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the original image; and
0123: acquiring the sharpness of the original image according to the number of pixels of the first high-frequency information and the total number of pixels of the original image.
In some embodiments, the second acquisition unit 112 may include a third acquiring subunit 1121, a fourth acquiring subunit 1122, and a fifth acquiring subunit 1123; step 0121 may be performed by the third acquiring subunit 1121, step 0122 by the fourth acquiring subunit 1122, and step 0123 by the fifth acquiring subunit 1123. That is, the third acquiring subunit 1121 may be configured to perform shaping low-pass filtering on the original image to acquire the first filtered image; the fourth acquiring subunit 1122 may be configured to acquire, according to the original image and the first filtered image, the first high-frequency information in the original image, the first high-frequency information being the portion of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the original image; and the fifth acquiring subunit 1123 may be configured to acquire the sharpness of the original image according to the number of pixels of the first high-frequency information and the total number of pixels of the original image.
Referring to fig. 3, in some embodiments, steps 0121, 0122, and 0123 may all be implemented by the processor 230; that is, the processor 230 may be configured to: perform shaping low-pass filtering on the original image to obtain the first filtered image; acquire, according to the original image and the first filtered image, the first high-frequency information in the original image, the first high-frequency information being the portion of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the original image; and acquire the sharpness of the original image according to the number of pixels of the first high-frequency information and the total number of pixels of the original image.
Specifically, the original image may be the original image with a portrait acquired from the album at a predetermined time and/or in a preset scene, or the original image with a portrait captured by the camera 221 during shooting. After an original image is obtained, shaping low-pass filtering is performed on it to obtain the first filtered image, and the first filtered image is then subtracted from the original image to obtain the first high-frequency information of the original image, the first high-frequency information being the portion of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the original image. Once the first high-frequency information is obtained, its pixels can be counted; the more pixels of first high-frequency information there are, the clearer the original image.
The sharpness of an image may be characterized by the ratio of the number of pixels of high-frequency information in the image to the total number of pixels in the image; the higher the ratio, the higher the sharpness. For example, if the number of pixels of the first high-frequency information in an original image is 20% of the total number of pixels of that image, its sharpness is characterized as 20%. Each sharpness value thus corresponds to a pixel count of first high-frequency information.
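For illustration, this sharpness measure can be sketched as follows. This is a minimal Python sketch: the Gaussian blur standing in for the shaping low-pass filter, the 9x9 kernel, and the magnitude threshold are assumptions, since the text fixes only the subtract-and-count scheme.

```python
import cv2
import numpy as np

def sharpness_ratio(image_bgr: np.ndarray, hf_magnitude: float = 10.0) -> float:
    """Sharpness as the fraction of pixels carrying high-frequency detail."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # First filtered image: a low-pass version of the original.
    low_pass = cv2.GaussianBlur(gray, (9, 9), 0)
    # First high-frequency information: original minus its low-pass version.
    high_freq = gray - low_pass
    # Count pixels whose high-frequency magnitude is significant, then normalize
    # by the total pixel count to obtain the sharpness ratio.
    hf_pixels = int(np.count_nonzero(np.abs(high_freq) > hf_magnitude))
    return hf_pixels / gray.size
```

An image would then be queued for repair when sharpness_ratio falls below the second threshold (0.15 in the worked example below).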
The second threshold is the critical value for deciding whether an original image needs to be repaired: with the number of pixels of the first high-frequency information in the original image taken as a first preset number, the second threshold is the ratio of the first preset number to the total number of pixels of the original image. For example, if the number of pixels of the first high-frequency information in an original image is smaller than the first preset number, the sharpness of that original image is less than the second threshold and the image needs to be repaired, so it can be taken as the image to be repaired.
The first preset number corresponds to the second threshold, and both are known values that may be obtained from multiple experiments and then stored in a memory element of the image processing apparatus 100 or the electronic device 200. Of course, a plurality of different first preset numbers may also be preset in the image processing apparatus 100 or the electronic device 200, each automatically associated with its corresponding second threshold, so that the user can select a different second threshold according to different requirements.
Taking as an example a second threshold of 15% and an original image of 16 million pixels, the first preset number is 2.4 million: when the number of pixels of the acquired first high-frequency information is less than 2.4 million, the sharpness of the original image is determined to be less than 15%, and the original image is taken as an image to be repaired.
Referring to fig. 4, 12 and 13, step 02 includes:
021: screening images containing faces from the album as primary screening images;
022: screening, from the primary screening images, images whose sharpness is greater than the first threshold as secondary screening images; and
023: detecting the similarity between the face in each secondary screening image and the face in the image to be repaired, and taking the secondary screening image with the highest similarity as the reference image.
In some embodiments, the screening module 12 further includes a first screening unit 121, a second screening unit 122, and a detection unit 123. Step 021 may be performed by the first screening unit 121, step 022 by the second screening unit 122, and step 023 by the detection unit 123. That is, the first screening unit 121 may be configured to screen images containing faces from the album as primary screening images; the second screening unit 122 may be configured to screen, from the primary screening images, images whose sharpness is greater than the first threshold as secondary screening images; and the detection unit 123 may be configured to detect the similarity between the face in each secondary screening image and the face in the image to be repaired, taking the secondary screening image with the highest similarity as the reference image.
Referring to fig. 3, in some embodiments, steps 021, 022, and 023 may be implemented by the processor 230; that is, the processor 230 may further be configured to: screen images containing faces from the album as primary screening images; screen, from the primary screening images, images whose sharpness is greater than the first threshold as secondary screening images; and detect the similarity between the face in each secondary screening image and the face in the image to be repaired, taking the secondary screening image with the highest similarity as the reference image.
Specifically, because the skin-tone colors of images containing faces are relatively concentrated in color space, whether an image contains a face can be judged by whether its skin-tone colors are concentrated in color space, checking every image in the album, and the images containing faces are taken as primary screening images. Alternatively, after a standard face template is designed in advance, the matching degree between each image in the album and the standard template can be calculated, the presence of a face judged by whether the matching degree reaches a certain threshold, and the images with faces taken as primary screening images. Of course, other methods may also be used to detect whether a face is present in an image, which is not limited here.
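A primary screen along the skin-tone lines described above might look like the following sketch. The YCrCb bounds and the 5% pixel fraction are illustrative assumptions; the text states only that skin tones cluster in color space.

```python
import cv2
import numpy as np

def likely_contains_face(image_bgr: np.ndarray, skin_fraction: float = 0.05) -> bool:
    """Primary screen: does a meaningful share of pixels fall in a skin-tone cluster?"""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    # A commonly used skin-tone cluster in the (Cr, Cb) plane.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    return np.count_nonzero(skin_mask) / skin_mask.size > skin_fraction
```

In practice, a template-matching pass or a dedicated face detector, as the text also mentions, would make this screen more robust.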
The sharpness of each primary screening image (that is, each image containing a face) is acquired and compared with the first threshold. When the sharpness is less than the first threshold, the primary screening image has low sharpness and is blurred, and cannot be used to repair the image to be repaired. When the sharpness is greater than the first threshold, the primary screening image has high sharpness and can be used to repair the image to be repaired, so it is taken as a secondary screening image. When the sharpness is exactly equal to the first threshold, the primary screening image may be treated either as a secondary screening image or not. It should be noted that the first threshold is greater than the second threshold; that is, the sharpness of a secondary screening image is greater than that of the image to be repaired.
Specifically, acquiring the sharpness of a primary screening image (that is, an image containing a face) includes: performing shaping low-pass filtering on the primary screening image to obtain a second filtered image, and subtracting the second filtered image from the primary screening image to obtain the second high-frequency information of the primary screening image, the second high-frequency information being the portion of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the primary screening image. Once the second high-frequency information is obtained, its pixels can be counted; the more pixels of second high-frequency information there are, the clearer the primary screening image.
The first threshold is the critical value for deciding whether a primary screening image can serve as the reference image: with the number of pixels of the second high-frequency information in the primary screening image taken as a second preset number, the first threshold is the ratio of the second preset number to the total number of pixels of the primary screening image. For example, if the number of pixels of the second high-frequency information in a primary screening image is smaller than the second preset number, the sharpness of that primary screening image is less than the first threshold, and the image cannot be used as the reference image and should be excluded.
The second preset number corresponds to the first threshold, and both are known values that may be obtained from multiple experiments and then stored in a memory element of the image processing apparatus 100 or the electronic device 200. Of course, a plurality of different second preset numbers may also be preset in the image processing apparatus 100 or the electronic device 200, each automatically associated with its corresponding first threshold, so that the user can select a different first threshold according to different requirements.
Taking as an example a first threshold of 20% and a primary screening image of 16 million pixels, the second preset number is 3.2 million: when the number of pixels of the acquired second high-frequency information is less than 3.2 million, the sharpness of the primary screening image is determined to be less than 20%, and the primary screening image is excluded; when the number of pixels of the acquired second high-frequency information is greater than 3.2 million, the sharpness of the primary screening image is determined to be greater than 20%, and the primary screening image is taken as a secondary screening image.
It should be noted that, compared with synthesizing an output image from preview frames that are themselves blurred, screening out a higher-sharpness image to repair the image to be repaired yields a repaired image with higher sharpness, that is, higher image quality.
One or more secondary screening images may be obtained. When a plurality are obtained, that is, when more than one primary screening image has sharpness greater than the first threshold, step 023 is performed: the similarity between the face in each secondary screening image and the face in the image to be repaired is detected, and the secondary screening image with the highest similarity is taken as the reference image.
When only one secondary screening image is obtained, that is, when only one primary screening image has sharpness greater than the first threshold, step 023 is still performed in some embodiments: the similarity between the face in that secondary screening image and the face in the image to be repaired is detected, and the secondary screening image with the highest similarity is taken as the reference image. In other embodiments, step 023 is not performed, and the single secondary screening image is used directly as the reference image, which improves image processing efficiency and speeds up image processing.
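Tying the two screening stages together, the album pass might be organized as in the sketch below, which reuses the earlier sketches. Here pick_most_similar stands for step 023 and is sketched later; the threshold values are the worked-example figures from the text.

```python
FIRST_THRESHOLD = 0.20   # a reference image must be at least this sharp
SECOND_THRESHOLD = 0.15  # images below this sharpness are queued for repair

def needs_repair(image) -> bool:
    # Step 013: an original image below the second threshold becomes an image to be repaired.
    return sharpness_ratio(image) < SECOND_THRESHOLD

def choose_reference(album_images, image_to_repair):
    # Step 021: primary screening - keep images that contain a face.
    primary = [img for img in album_images if likely_contains_face(img)]
    # Step 022: secondary screening - keep images sharper than the first threshold.
    secondary = [img for img in primary if sharpness_ratio(img) > FIRST_THRESHOLD]
    if not secondary:
        return None  # no usable reference image in the album
    if len(secondary) == 1:
        return secondary[0]  # the shortcut embodiment: skip step 023
    # Step 023: pick the secondary screening image most similar to the target face.
    return pick_most_similar(secondary, image_to_repair)
```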
Referring to fig. 14 and 15, step 023 includes:
0231: performing image preprocessing on the secondary screening images and the image to be repaired, respectively;
0232: performing face feature extraction on the preprocessed secondary screening images and the preprocessed image to be repaired by using convolution layers and pooling layers, to obtain a first feature image corresponding to each secondary screening image and a second feature image corresponding to the image to be repaired;
0233: classifying each feature in the first feature image and each feature in the second feature image by using a fully connected layer, and representing each feature as a vector;
0234: calculating the gap between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image, to obtain a plurality of gaps corresponding to the plurality of categories; and
0235: calculating the comprehensive gap between each secondary screening image and the image to be repaired according to the plurality of gaps corresponding to the plurality of categories, and taking the secondary screening image with the smallest comprehensive gap as the reference image.
In some embodiments, the detection unit 123 further includes a first processing subunit 1231, a second processing subunit 1232, a third processing subunit 1233, a fourth processing subunit 1234, and a fifth processing subunit 1235. Step 0231 may be performed by the first processing subunit 1231, step 0232 by the second processing subunit 1232, step 0233 by the third processing subunit 1233, step 0234 by the fourth processing subunit 1234, and step 0235 by the fifth processing subunit 1235. That is, the first processing subunit 1231 may be configured to perform image preprocessing on the secondary screening images and the image to be repaired, respectively. The second processing subunit 1232 may be configured to perform face feature extraction on the preprocessed secondary screening images and the preprocessed image to be repaired by using the convolution layers and pooling layers, to obtain the first feature image corresponding to each secondary screening image and the second feature image corresponding to the image to be repaired. The third processing subunit 1233 may be configured to classify each feature in the first feature image and each feature in the second feature image by using the fully connected layer, representing each feature as a vector. The fourth processing subunit 1234 may be configured to calculate the gap between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image, to obtain a plurality of gaps corresponding to the plurality of categories. The fifth processing subunit 1235 may be configured to calculate the comprehensive gap between each secondary screening image and the image to be repaired from the plurality of gaps, taking the secondary screening image with the smallest comprehensive gap as the reference image.
Referring to fig. 3, in some embodiments, steps 0231, 0232, 0233, 0234, and 0235 may be executed by the processor 230; that is, the processor 230 may also be configured to: perform image preprocessing on the secondary screening images and the image to be repaired, respectively; perform face feature extraction on the preprocessed secondary screening images and the preprocessed image to be repaired by using the convolution layers and pooling layers, to obtain the first feature image corresponding to each secondary screening image and the second feature image corresponding to the image to be repaired; classify each feature in the first feature image and each feature in the second feature image by using the fully connected layer, representing each feature as a vector; calculate the gap between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image, to obtain a plurality of gaps corresponding to the plurality of categories; and calculate the comprehensive gap between each secondary screening image and the image to be repaired according to the plurality of gaps, taking the secondary screening image with the smallest comprehensive gap as the reference image.
Specifically, all the obtained secondary screening images and the image to be repaired are first preprocessed: Gaussian noise is filtered out with a Gaussian filter so that the images are smoother, preventing noise spikes and burrs from interfering with subsequent image processing. Face feature extraction is then performed on the preprocessed secondary screening images and the preprocessed image to be repaired to obtain the first feature image corresponding to each secondary screening image and the second feature image corresponding to the image to be repaired, and each feature in the first and second feature images is classified and represented as a vector. Specifically, as shown in fig. 16, a preprocessed secondary screening image undergoes multiple convolution and pooling operations; the convolution and pooling layers extract the face features of the secondary screening image to obtain the corresponding first feature image. The last convolution layer performs a final convolution on the feature images output by the preceding convolution and pooling layers, and outputs the first feature image obtained by this final convolution to the fully connected layer. The fully connected layer classifies each feature in that first feature image and represents it as a vector. The feature vectors of the image to be repaired are extracted in the same way as for the secondary screening images, which is not repeated here.
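A minimal PyTorch sketch of such an extractor is given below. The layer sizes, depths, and the five feature categories are assumptions, since the text describes the network only at the level of fig. 16 (convolution and pooling layers, a last convolution, then a fully connected layer that classifies features and emits vectors).

```python
import torch
import torch.nn as nn

class FaceFeatureNet(nn.Module):
    """Convolution/pooling backbone plus per-category fully connected heads."""

    CATEGORIES = ("eyes", "eyebrows", "nose", "mouth", "ears")  # assumed categories

    def __init__(self, dim_per_category: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3),  # the "last convolution" before the FC layer
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One fully connected head per feature category, each emitting one vector.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(64, dim_per_category) for name in self.CATEGORIES}
        )

    def forward(self, x: torch.Tensor) -> dict:
        shared = self.backbone(x)
        return {name: head(shared) for name, head in self.heads.items()}
```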
After the feature vectors in the first feature image corresponding to each secondary screening image and the feature vectors in the second feature image corresponding to the image to be repaired are obtained, the gap between the feature vector of each category in each first feature image and the feature vector of the corresponding category in the second feature image is calculated. For example, the feature vector representing eye width in the first feature image and the feature vector representing eye width in the second feature image are selected, and the gap between the two vectors is calculated; or the feature vector representing nose-bridge height in the first feature image and the feature vector representing nose-bridge height in the second feature image are selected, and the gap between the two vectors is calculated.
The comprehensive gap between each secondary screening image and the image to be repaired is then calculated from the plurality of gaps corresponding to the plurality of categories, and the secondary screening image with the smallest comprehensive gap is taken as the reference image. In some embodiments, the Euclidean distance may be used to calculate the comprehensive gap. For example, suppose the feature-vector categories include the eyes, nose, mouth, and ears; the feature vector representing the eyes is A in the first feature image and A₀ in the second feature image; the feature vector representing the nose is B in the first feature image and B₀ in the second feature image; the feature vector representing the mouth is C in the first feature image and C₀ in the second feature image; and the feature vector representing the ears is D in the first feature image and D₀ in the second feature image. The comprehensive gap L computed as a Euclidean distance is then the arithmetic square root of the sum of the squared differences between same-category feature vectors of the first and second feature images, that is:

L = √(‖A − A₀‖² + ‖B − B₀‖² + ‖C − C₀‖² + ‖D − D₀‖²)

The smaller the calculated Euclidean distance, the smaller the comprehensive gap, that is, the more similar the face in the secondary screening image is to the face in the image to be repaired; the secondary screening image with the smallest Euclidean distance is therefore selected as the reference image. Of course, the cosine distance, the Mahalanobis distance, or the Pearson correlation coefficient may also be used to calculate the comprehensive gap, which is not limited here.
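The comprehensive-gap computation itself is a few lines. The sketch below assumes the per-category feature dictionaries produced by the extractor sketch above, stored as 1-D arrays; extract_features is a hypothetical wrapper that runs the network on one image.

```python
import numpy as np

def composite_gap(feats_a: dict, feats_b: dict) -> float:
    """Euclidean comprehensive gap between two per-category feature-vector dicts."""
    return float(np.sqrt(sum(
        float(np.sum((np.asarray(vec) - np.asarray(feats_b[name])) ** 2))
        for name, vec in feats_a.items()
    )))

def pick_most_similar(secondary_images, image_to_repair):
    """Step 023: return the secondary screening image with the smallest gap."""
    target = extract_features(image_to_repair)  # hypothetical wrapper, see above
    return min(secondary_images,
               key=lambda img: composite_gap(extract_features(img), target))
```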
It should be noted that, in some embodiments, image preprocessing may be omitted, and face feature extraction performed directly on all the secondary screening images and the image to be repaired to obtain the first feature image corresponding to each secondary screening image and the second feature image corresponding to the image to be repaired; the subsequent processing steps are the same as in the above embodiments and are not repeated here. This increases the overall speed of image processing and improves the user experience.
Referring to fig. 4, 17 and 18, step 03 includes:
031: performing content generation processing on the image to be repaired through a content generation network, so as to preserve the content features of the image to be repaired;
032: extracting texture features from the reference image by using a texture generation network; and
033: mapping the texture features onto the content features of the image to be repaired to obtain the repaired image.
In some embodiments, the processing module 13 further includes a first generating unit 131, a second generating unit 132, and a repairing unit 133. Step 031 may be performed by the first generating unit 131, step 032 by the second generating unit 132, and step 033 by the repairing unit 133. That is, the first generating unit 131 may be configured to perform content generation processing on the image to be repaired through the content generation network so as to preserve the content features of the image to be repaired; the second generating unit 132 may be configured to extract texture features from the reference image using the texture generation network; and the repairing unit 133 may be configured to map the texture features onto the content features of the image to be repaired to obtain the repaired image.
Referring to fig. 3, in some embodiments, steps 031, 032, and 033 may all be implemented by the processor 230; that is, the processor 230 may further be configured to: perform content generation processing on the image to be repaired through the content generation network so as to preserve the content features of the image to be repaired; extract texture features from the reference image by using the texture generation network; and map the texture features onto the content features of the image to be repaired to obtain the repaired image.
Specifically, content generation processing is performed on the image to be repaired through the content generation network so as to preserve the content features of the image to be repaired. For example, fig. 19 is a schematic diagram of obtaining the content features of the image to be repaired through the content generation network: four convolutions are performed on the image to be repaired to obtain a plurality of feature images; the fourth convolution layer (that is, the last convolution layer) performs a final convolution on the feature image output by the third convolution layer and outputs the resulting feature image to the fully connected layer; the feature vectors of the image to be repaired are obtained through the fully connected layer; and four deconvolutions are then performed on the obtained feature vectors to obtain a content image carrying the content features of the image to be repaired. The content image contains all the content features of the image to be repaired, such as the positions of the eyes and the positions of the eyebrows, but the contour features of the facial features in the content image are blurred, such as the shape of the eye sockets and the thickness of the eyebrows. The numbers of convolutions and deconvolutions may each be any natural number greater than or equal to 1, for example 3, 5, 7, or 8, which is not limited here.
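Fig. 19's four-convolution, fully connected, four-deconvolution shape might be realized as in the following PyTorch sketch; the channel counts, the 128x128 input size, and the bottleneck width are assumptions, since the text fixes only the overall shape of this example.

```python
import torch.nn as nn

class ContentGenerationNet(nn.Module):
    """Encoder (4 convs), fully connected bottleneck, decoder (4 deconvs)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.bottleneck = nn.Sequential(  # the fully connected layer of fig. 19
            nn.Flatten(), nn.Linear(32 * 8 * 8, 1024), nn.ReLU(),
            nn.Linear(1024, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),              # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.bottleneck(self.encoder(x)))
```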
Texture generation processing is performed on the reference image through the texture generation network to obtain the texture feature information of the reference image. For example, fig. 20 is a schematic diagram of obtaining the texture features of the reference image through the texture generation network: six convolutions are performed on the reference image to obtain a plurality of feature images; the sixth convolution layer (that is, the last convolution layer) performs a final convolution on the feature image output by the fifth convolution layer and outputs the resulting feature image to the fully connected layer; the feature vectors of the reference image are obtained through the fully connected layer, and the texture features of the reference image are obtained from these feature vectors. The texture features include the contour information of the facial features of the reference image, such as the contour of the eye sockets and the contour of the eyebrows. The number of convolutions may likewise be any natural number greater than or equal to 1, for example 3, 5, 7, or 8, which is not limited here.
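The texture generation network of fig. 20 differs mainly in depth and in emitting a feature vector rather than an image; a matching sketch follows, with the channel counts and vector size again assumed.

```python
import torch.nn as nn

class TextureGenerationNet(nn.Module):
    """Six convolution layers, then a fully connected layer emitting a texture vector."""

    def __init__(self, texture_dim: int = 256):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (16, 16, 32, 32, 64, 64):  # six convolution layers
            layers += [nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU()]
            in_ch = out_ch
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, texture_dim),
        )

    def forward(self, x):
        return self.fc(self.conv(x))
```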
Referring to fig. 21, all the texture features are mapped onto the content features of the image to be repaired to obtain a repaired image whose facial-feature contours are clear. For example, texture information representing the eye-socket shape, acquired from the reference image, is mapped onto the eye-position content feature, so the sharpness of the eye contour increases; texture information representing the eyebrow shape, acquired from the reference image, is mapped onto the eyebrow-position content feature, so the sharpness of the eyebrow contour increases. It should be noted that the content features of the repaired image all come from the image to be repaired; that is, content features not contained in the image to be repaired do not appear in the repaired image. For example, if a feature I1 exists in the reference image but not in the image to be repaired, then, because the content features of the repaired image all come from the image to be repaired, the repaired image does not contain I1.
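The patent does not specify the mapping mechanism of step 033. One plausible realization, borrowed from AdaIN-style feature modulation rather than from this patent, is to scale and shift the content feature maps using parameters predicted from the reference texture vector:

```python
import torch
import torch.nn as nn

class TextureMapper(nn.Module):
    """Modulate content feature maps with texture statistics from the reference image."""

    def __init__(self, channels: int = 32, texture_dim: int = 256):
        super().__init__()
        # Predict a per-channel scale and shift from the reference texture vector.
        self.affine = nn.Linear(texture_dim, 2 * channels)

    def forward(self, content_maps: torch.Tensor, texture_vec: torch.Tensor) -> torch.Tensor:
        scale, shift = self.affine(texture_vec).chunk(2, dim=1)
        scale = scale[:, :, None, None]  # broadcast over the spatial dimensions
        shift = shift[:, :, None, None]
        # Sharpen facial-contour detail in the content maps using the reference texture.
        return content_maps * (1 + scale) + shift
```

A decoder like the one in the ContentGenerationNet sketch would then turn the modulated feature maps back into the repaired image.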
In this way, a high-sharpness reference face most similar to the face to be repaired is automatically selected from the album as the reference image for the blurred image. Compared with traditional image enhancement methods, this reference-image-based repair can well reconstruct the contour features of blurred facial features, effectively improving the sharpness of the face image while enhancing the sharpness of the facial-feature contours.
Referring to fig. 4 and fig. 22 together, the present application further provides a computer-readable storage medium 300 having stored thereon a computer program 310 which, when executed by the processor 230, implements the steps of the image processing method of any of the above embodiments.
For example, in the case where the program is executed by a processor, the steps of the following image processing method are implemented:
01: acquiring an image to be repaired, wherein the image to be repaired contains a human face;
02: finding, from the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking the image as a reference image; and
03: processing the image to be repaired according to the reference image to obtain a restored image.
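For orientation only, steps 01-03 can be read as the following top-level sketch; `sharpness`, `has_face`, `face_similarity` and `restore` are hypothetical helper names introduced for illustration, not functions defined by this disclosure.

```python
def repair_portrait(image_to_repair, album, first_threshold):
    """Hypothetical top-level flow of steps 01-03 above."""
    # 02: keep only album images that contain a face and are sharp enough
    # to serve as references ...
    candidates = [img for img in album
                  if has_face(img) and sharpness(img) > first_threshold]
    # ... then pick the one whose face is most similar to the face to repair.
    reference = max(candidates,
                    key=lambda img: face_similarity(img, image_to_repair))
    # 03: repair the blurred image using the chosen reference.
    return restore(image_to_repair, reference)
```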
The computer readable storage medium 300 may be disposed in the image processing apparatus 100 or the electronic device 200, or may be disposed in a cloud server, where the image processing apparatus 100 or the electronic device 200 may communicate with the cloud server to obtain the corresponding computer program 310.
It is understood that the computer program 310 includes computer program code. The computer program code may be in the form of source code, object code, an executable file, or some intermediate form, among others. The computer readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and so forth.
The processor 230 may also be referred to as a drive board. The drive board may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; changes, modifications, substitutions and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.
Claims (10)
1. An image processing method, characterized in that the image processing method comprises:
acquiring an image to be repaired, wherein the image to be repaired contains a human face;
finding, from the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking the image as a reference image;
performing content generation processing on the image to be repaired through a content generation network so as to preserve content features in the image to be repaired, wherein the image to be repaired undergoes four convolution processes to obtain a plurality of first feature images of the image to be repaired, a fourth convolution layer performs a last convolution on the first feature image output by a third convolution layer, the feature image obtained by the last convolution is output to a full connection layer, a feature vector of the image to be repaired is obtained through the full connection layer, and four deconvolution processes are performed on the obtained feature vector to obtain a content image having the content features of the image to be repaired;
extracting texture features from the reference image by using a texture generation network, wherein the texture features comprise contour information of the facial features of the reference image, the reference image undergoes six convolution processes to obtain a plurality of second feature images of the reference image, a sixth convolution layer performs a last convolution on the second feature image output by a fifth convolution layer, the feature image obtained by the last convolution is output to a full connection layer, a feature vector of the reference image is obtained through the full connection layer, and the texture features of the reference image are obtained according to the feature vector of the reference image; and
mapping the texture features onto the content features in the image to be repaired so as to obtain a restored image.
2. The image processing method according to claim 1, wherein the acquiring the image to be repaired includes:
acquiring an original image with a portrait;
acquiring the sharpness of the original image; and
determining the original image whose sharpness is less than a second threshold as the image to be repaired, wherein the second threshold is less than the first threshold.
3. The image processing method according to claim 2, wherein the acquiring the original image having the portrait includes:
acquiring an original image with a portrait from the album at a preset time and/or in a preset scene.
4. The image processing method according to claim 2, wherein the acquiring the original image having the portrait includes:
acquiring, during a shooting process of a camera, an original image with a portrait captured by the camera.
5. The image processing method according to any one of claims 2 to 4, wherein the acquiring the sharpness of the original image includes:
performing shaping low-pass filtering on the original image to obtain a filtered image;
acquiring high-frequency information of the original image according to the original image and the filtered image, wherein the high-frequency information is the part of the discrete cosine transform coefficients far from zero frequency and describes the detail information of the original image; and
acquiring the sharpness of the original image according to the number of pixels of the high-frequency information and the total number of pixels of the original image.
6. The image processing method according to any one of claims 2 to 4, wherein the finding, from the album, of the image whose face is most similar to the face in the image to be repaired and whose sharpness is greater than the first threshold, as the reference image, includes:
screening out images containing human faces from the album as primary screening images;
screening out, from the primary screening images, images whose sharpness is greater than the first threshold as secondary screening images; and
detecting the similarity between the face in each of the secondary screening images and the face in the image to be repaired, and taking the secondary screening image with the highest similarity as the reference image.
7. The image processing method according to claim 6, wherein the detecting the similarity between the face in each of the secondary screening images and the face in the image to be repaired, and taking the secondary screening image with the highest similarity as the reference image, includes:
respectively carrying out image preprocessing on the secondary screening image and the image to be repaired;
respectively extracting face features from the preprocessed secondary screening image and the preprocessed image to be repaired by using convolution layers and pooling layers, to obtain a first feature image corresponding to the secondary screening image and a second feature image corresponding to the image to be repaired;
classifying each feature in the first feature image and each feature in the second feature image by using a full connection layer, and representing each feature as a vector;
calculating the gap between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image to obtain a plurality of gaps corresponding to a plurality of categories; and
calculating a comprehensive gap between the secondary screening image and the image to be repaired according to the plurality of gaps corresponding to the plurality of categories, and taking the secondary screening image with the smallest comprehensive gap as the reference image.
8. An image processing apparatus, characterized in that the image processing apparatus comprises:
an acquisition module, used for acquiring an image to be repaired, wherein the image to be repaired contains a human face;
a screening module, used for finding, from the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be repaired, and taking the image as a reference image; and
a processing module, used for performing content generation processing on the image to be repaired through a content generation network so as to preserve content features in the image to be repaired; extracting texture features from the reference image using a texture generation network; and mapping the texture features onto the content features in the image to be repaired to obtain a restored image; wherein the texture features comprise contour information of the facial features of the reference image; the image to be repaired undergoes four convolution processes to obtain a plurality of first feature images of the image to be repaired, a fourth convolution layer performs a last convolution on the first feature image output by the third convolution layer, the feature image obtained by the last convolution is output to a full connection layer, a feature vector of the image to be repaired is obtained through the full connection layer, and four deconvolution processes are performed on the obtained feature vector to obtain a content image having the content features of the image to be repaired; and the reference image undergoes six convolution processes to obtain a plurality of second feature images of the reference image, a sixth convolution layer performs a last convolution on the second feature image output by the fifth convolution layer, the feature image obtained by the last convolution is output to the full connection layer, a feature vector of the reference image is obtained through the full connection layer, and the texture features of the reference image are obtained according to the feature vector of the reference image.
9. An electronic device, comprising a housing, an imaging device and a processor, the imaging device and the processor both being mounted on the housing, the imaging device being used for capturing images and the processor being used for implementing the image processing method of any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image processing method of any one of claims 1-7.
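As illustrative aids to the claims above (and not as the claimed implementations), the following sketches show one possible reading of claim 5's sharpness measure and claim 7's comprehensive gap. The first approximates the DCT-based high-frequency description in claim 5 with a spatial-domain residual after low-pass filtering; the Gaussian kernel size and the threshold `hf_thresh` are assumptions, not claimed values.

```python
import cv2
import numpy as np

def sharpness(gray: np.ndarray, hf_thresh: float = 10.0) -> float:
    """Hypothetical sharpness score: fraction of pixels carrying
    high-frequency detail after low-pass filtering (cf. claim 5)."""
    low = cv2.GaussianBlur(gray, (7, 7), 0)           # filtered image
    high = cv2.absdiff(gray, low)                     # high-frequency part
    detail_pixels = int(np.count_nonzero(high > hf_thresh))
    return detail_pixels / high.size                  # pixel ratio in [0, 1]
```

And one possible reading of the per-category gap combination in claim 7, assuming Euclidean distances and equal category weights:

```python
import numpy as np

def comprehensive_gap(vecs_a, vecs_b, weights=None):
    """Hypothetical composite score over per-category feature-vector gaps;
    the secondary screening image with the smallest score would be taken
    as the reference image (cf. claim 7)."""
    gaps = [float(np.linalg.norm(a - b)) for a, b in zip(vecs_a, vecs_b)]
    weights = weights or [1.0] * len(gaps)            # equal weights assumed
    return sum(w * g for w, g in zip(weights, gaps))
```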
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911253780.XA CN111105369B (en) | 2019-12-09 | 2019-12-09 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911253780.XA CN111105369B (en) | 2019-12-09 | 2019-12-09 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111105369A CN111105369A (en) | 2020-05-05 |
CN111105369B true CN111105369B (en) | 2024-08-20 |
Family
ID=70422584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911253780.XA Active CN111105369B (en) | 2019-12-09 | 2019-12-09 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111105369B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112926369A (en) * | 2019-12-06 | 2021-06-08 | 中兴通讯股份有限公司 | Face image processing method and device, computer equipment and medium |
CN117729445A (en) * | 2024-02-07 | 2024-03-19 | 荣耀终端有限公司 | Image processing method, electronic device and computer readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105389780A (en) * | 2015-10-28 | 2016-03-09 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal
CN107944399A (en) * | 2017-11-28 | 2018-04-20 | Guangzhou University | Pedestrian re-identification method based on a convolutional neural network target-center model
CN109360170A (en) * | 2018-10-24 | 2019-02-19 | Beijing Technology and Business University | Face restoration method based on high-level features
CN109919830A (en) * | 2019-01-23 | 2019-06-21 | Fudan University | Reference-guided human eye image inpainting method based on aesthetic evaluation
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8570386B2 (en) * | 2008-12-31 | 2013-10-29 | Stmicroelectronics S.R.L. | Method of merging images and relative method of generating an output image of enhanced quality |
2019
- 2019-12-09: CN, application CN201911253780.XA, patent CN111105369B (en), status Active
Non-Patent Citations (2)
Title |
---|
Zheng Shaohua et al., "Real-time no-reference evaluation method for the sharpness of fundus imaging", Journal of Electronic Measurement and Instrumentation, 2013, pp. 242-243. *
Also Published As
Publication number | Publication date |
---|---|
CN111105369A (en) | 2020-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017198040A1 (en) | Facial image processing apparatus, facial image processing method, and non-transitory computer-readable storage medium | |
US7933454B2 (en) | Class-based image enhancement system | |
US8194992B2 (en) | System and method for automatic enhancement of seascape images | |
CN111031239B (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
US20110268359A1 (en) | Foreground/Background Segmentation in Digital Images | |
CN110807759B (en) | Method and device for evaluating photo quality, electronic equipment and readable storage medium | |
Pan et al. | MIEGAN: Mobile image enhancement via a multi-module cascade neural network | |
JP2002245471A (en) | Photograph finishing service for double print accompanied by second print corrected according to subject contents | |
CN111031241B (en) | Image processing method and device, terminal and computer readable storage medium | |
CN110910331B (en) | Image processing method, image processing apparatus, electronic device, and computer-readable storage medium | |
CN110276831B (en) | Method and device for constructing three-dimensional model, equipment and computer-readable storage medium | |
CN111105368B (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
Moriwaki et al. | Hybrid loss for learning single-image-based HDR reconstruction | |
CN111105369B (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN112036209A (en) | Portrait photo processing method and terminal | |
Asmare et al. | Image Enhancement by Fusion in Contourlet Transform. | |
US20220398704A1 (en) | Intelligent Portrait Photography Enhancement System | |
CN114897916A (en) | Image processing method and device, nonvolatile readable storage medium and electronic equipment | |
CN111062904B (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN117496019B (en) | Image animation processing method and system for driving static image | |
Raipurkar et al. | HDR-cGAN: single LDR to HDR image translation using conditional GAN | |
CN111105370A (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
JP2004240622A (en) | Image processing method, image processor and image processing program | |
CN111083359B (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
CN110992284A (en) | Image processing method, image processing apparatus, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||