CN116612015A - Model training method, image moire removal method and device, and electronic equipment - Google Patents

Info

Publication number
CN116612015A
Authority
CN
China
Prior art keywords
moire
feature extraction
extraction layer
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210118889.8A
Other languages
Chinese (zh)
Inventor
刘晗 (Liu Han)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210118889.8A priority Critical patent/CN116612015A/en
Priority to PCT/CN2023/074325 priority patent/WO2023151511A1/en
Publication of CN116612015A publication Critical patent/CN116612015A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a model training method, an image moire removal method and device, and an electronic device, belonging to the technical field of artificial intelligence. The model training method comprises the following steps: acquiring a plurality of moire sample images and corresponding moire-free sample images; constructing a model to be trained, wherein the model to be trained is built on a lightweight network; respectively inputting the plurality of moire sample images into the model to be trained, and acquiring a first loss according to the predicted image output by the smallest-scale feature extraction layer in the model to be trained and the moire-free sample image downsampled to the same scale; updating the parameters of the smallest-scale feature extraction layer according to the first loss until a preset training condition is met; and after training of the smallest-scale feature extraction layer is completed, applying the same training process to the feature extraction layer of the next larger scale until training is completed on the largest-scale feature extraction layer, obtaining a target model used for removing moire from images.

Description

Model training method, image moire removal method and device, and electronic equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a model training method, an image moire removal method and device, and an electronic device.
Background
Moire is high-frequency interference fringing produced on the photosensitive element of a digital camera, scanner, or similar device; it appears in the image as irregular colored stripes. Current moire removal methods fall into two main categories. The first is the traditional approach, which processes the moire image on the YUV channels using the spatial and frequency characteristics of moire; because moire spans a wide frequency range and varies greatly in density and color, the removal effect of traditional methods is not robust. The second is the deep learning approach, in which a network learns the mapping from moire images to moire-free images through training, and the trained network model is then used to remove moire from images.
Compared with the traditional methods, existing deep learning methods are robust in moire removal, but because the trained network model cannot be deployed on the electronic device itself, moire removal takes a long time when a user captures an image with an electronic device, and moire removal efficiency is low.
Disclosure of Invention
The embodiments of the application aim to provide a model training method, an image moire removal method and device, and an electronic device, which can solve the problem of low moire removal efficiency in the prior art.
In a first aspect, an embodiment of the present application provides a model training method, where the method includes:
acquiring a plurality of moire sample images and corresponding moire-free sample images;
constructing a model to be trained, wherein the model to be trained is a model constructed based on a lightweight network, and the lightweight network comprises a plurality of feature extraction layers with different scales;
respectively inputting the plurality of moire sample images into the model to be trained, and acquiring a first loss according to the predicted image output by the smallest-scale feature extraction layer in the model to be trained and the moire-free sample image downsampled to the same scale; updating the parameters of the smallest-scale feature extraction layer according to the first loss until a preset training condition is met; and after training of the smallest-scale feature extraction layer is completed, applying the same training process to the feature extraction layer of the next larger scale in the model to be trained until training is completed on the largest-scale feature extraction layer, obtaining a target model.
In a second aspect, an embodiment of the present application provides an image moire removal method for performing moire removal based on the target model of the first aspect, the method including:
receiving a second moire image to be processed;
in the case that the size of the second moire image exceeds the maximum size identifiable by the target model, segmenting the second moire image into N moire sub-images, wherein each of the N moire sub-images overlaps its adjacent sub-images and N is an integer greater than 1;
respectively inputting the N moire sub-images into the target model for processing to obtain N moire-free sub-images;
and stitching the N moire-free sub-images, performing a pixel-weighted-average operation on the overlapping regions during stitching, to obtain a second moire-free image corresponding to the second moire image.
In a third aspect, an embodiment of the present application provides a model training apparatus, including:
the acquisition module is used for acquiring a plurality of moire sample images and corresponding moire-free sample images;
the building module is used for building a model to be trained, wherein the model to be trained is a model built based on a lightweight network, and the lightweight network comprises a plurality of feature extraction layers with different scales;
the training module is used for respectively inputting the plurality of moire sample images into the model to be trained, and acquiring a first loss according to the predicted image output by the smallest-scale feature extraction layer in the model to be trained and the moire-free sample image downsampled to the same scale; updating the parameters of the smallest-scale feature extraction layer according to the first loss until a preset training condition is met; and after training of the smallest-scale feature extraction layer is completed, applying the same training process to the feature extraction layer of the next larger scale in the model to be trained until training is completed on the largest-scale feature extraction layer, obtaining a target model.
In a fourth aspect, an embodiment of the present application provides an image moire removal device for performing moire removal based on the target model of the third aspect, the device including:
the receiving module is used for receiving a second moire image to be processed;
the segmentation module is used for segmenting the second moire image into N moire sub-images in the case that the size of the second moire image exceeds the maximum size identifiable by the target model, wherein each of the N moire sub-images has a region overlapping its adjacent sub-images and N is an integer greater than 1;
the first processing module is used for respectively inputting the N moire sub-images into the target model for processing to obtain N moire-free sub-images;
and the second processing module is used for stitching the N moire-free sub-images and performing a pixel-weighted-average operation on the overlapping regions during stitching to obtain a second moire-free image corresponding to the second moire image.
In a fifth aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method according to the first or second aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first or second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect or the second aspect.
In an eighth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first or second aspect.
In the embodiments of the application, a plurality of moire sample images and corresponding moire-free sample images can be obtained and used as training data; a lightweight moire removal network is constructed as the model to be trained; and the model to be trained is trained with the training data to obtain a target model for removing moire from input images. Compared with the prior art, in the embodiments of the application an existing deep learning network can be compressed and quantized into a lightweight network, and model training is performed on that lightweight network, reducing the computational cost of the model without losing accuracy. This allows the moire removal network to be deployed on the electronic device itself, so that the moire removal function is triggered automatically when a user captures an image with the device, quickly producing a moire-free high-definition image that faithfully restores the captured scene and improving moire removal efficiency.
In the embodiments of the application, when the target model is used for moire removal, a large image to be processed can be segmented into several parts with overlapping regions between them. Each part is input into the model separately to obtain a corresponding moire-free high-definition image; the per-part images are then stitched together, with a pixel-level weighted-average operation applied to the overlapping region of each pair of adjacent images, yielding a complete high-definition image without stitching lines and a good moire removal result.
Drawings
FIG. 1 is a flow chart of a model training method provided by an embodiment of the present application;
FIG. 2 is a flowchart of a training data generation method according to an embodiment of the present application;
FIG. 3 is a flow chart of a lightweight network generation process provided by an embodiment of the present application;
fig. 4 is an exemplary diagram of a PyNET network provided by an embodiment of the present application;
FIG. 5 is an exemplary diagram of a lightweight network provided by an embodiment of the present application;
FIG. 6 is a flow chart of a model training process based on a lightweight network provided by an embodiment of the present application;
FIG. 7 is a flow chart of a method for removing moire from an image according to an embodiment of the present application;
FIG. 8 is a block diagram of a model training apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of an image moire removal device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein; objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The embodiments of the application provide a model training method, an image moire removal method and device, and an electronic device.
For ease of understanding, some concepts involved in embodiments of the application are first described below.
Model compression: a trained deep model is simplified to obtain a lightweight network of comparable accuracy. The compressed network has a smaller structure and fewer parameters, which effectively reduces computation and storage costs and eases deployment in constrained hardware environments.
AF (Automatic Focus): the camera automatically adjusts focus so that the photo reaches its sharpest state. The camera can use an electronic rangefinder to adjust automatically, locking onto the target distance and driving the lens back and forth to the corresponding position; the subject must be aimed at when shooting, otherwise unclear focusing leaves the image blurred.
Lens distortion: a generic term for the inherent perspective distortion of an optical lens, including pincushion distortion, barrel distortion, linear distortion, and the like.
FLOPS (Floating-point Operations Per Second): often used to estimate the computational cost of a deep learning model; the larger the value, the more computation the model requires.
The methods provided by the embodiments of the application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
Fig. 1 is a flowchart of a model training method according to an embodiment of the present application, as shown in fig. 1, the method may include the following steps: step 101, step 102 and step 103, wherein,
in step 101, a plurality of moire sample images and corresponding moire-free sample images are acquired.
In the embodiment of the application, a plurality of moire sample images and corresponding moire-free sample images are used as training data.
In step 102, a model to be trained is constructed, wherein the model to be trained is a model constructed based on a lightweight network, and the lightweight network comprises a plurality of feature extraction layers with different scales.
In the embodiment of the application, the feature extraction layers of different scales extract features of the input image at different scales. An existing deep learning network can be compressed and quantized to obtain the lightweight network.
In some embodiments, the generation process of the lightweight network may include: obtaining a PyNET network, deleting the feature extraction layers of specific scales in the PyNET network, reducing the numbers of convolution kernel channels of the retained feature extraction layers to preset values, and modifying the activation functions and normalization functions in the retained feature extraction layers to obtain the lightweight network, where a feature extraction layer of a specific scale extracts features of the input image at that scale.
In step 103, the plurality of moire sample images are respectively input into the model to be trained, and a first loss is acquired according to the predicted image output by the smallest-scale feature extraction layer in the model to be trained and the moire-free sample image downsampled to the same scale; the parameters of the smallest-scale feature extraction layer are updated according to the first loss until a preset training condition is met; and after training of the smallest-scale feature extraction layer is completed, the same training process is applied to the feature extraction layer of the next larger scale in the model to be trained until training is completed on the largest-scale feature extraction layer, obtaining a target model.
In the embodiment of the application, during model training the feature extraction layers are trained in order starting from the smallest scale: after the smallest-scale feature extraction layer is pre-trained, the same process is applied to the feature extraction layer of the adjacent larger scale, until training is completed on the largest-scale feature extraction layer.
According to this embodiment, an existing deep learning network can be compressed and quantized into a lightweight network, and model training can be performed on that lightweight network, reducing the computational cost of the model without losing accuracy. The moire removal network can thus be deployed on the electronic device itself, so that the moire removal function is triggered automatically when a user captures an image with the device, quickly producing a moire-free high-definition image that faithfully restores the captured scene and improving moire removal efficiency.
In the prior art, a screenshot is used as the moire-free sample image, and a photograph of the screen taken with a mobile phone is used as the moire sample image. However, when the screenshot is used as the training target image, the network model cannot learn the illumination information of the original scene, so the moire removal effect of the trained network model is poor. To solve this problem, fig. 2 shows a flowchart of a training data generation method according to an embodiment of the present application, comprising the following steps: step 201, step 202 and step 203.
In step 201, a screenshot from a display device is acquired.
In the embodiment of the application, the screenshot is an image obtained by performing a screen-capture operation on the picture displayed on the screen of the display device. The screenshot is a moire-free high-definition image.
In step 202, with the camera in focus, the white image displayed on the display device is photographed to obtain a first moire image, and a moire sample image is generated from the screenshot, the white image and the first moire image.
In the embodiment of the application, the white image is a pure-white background image in which the pixel value of every pixel is 255. The display device may be a computer. Considering that moire is mainly the combined result of the frequency of the display screen and the frequency of the camera of the photographing device, and is essentially unrelated to the picture displayed on the screen, in the embodiment of the application the pure-white background image is first used as the material for moire shooting to obtain the first moire image.
Considering that the moire captured by the camera can be regarded as complex additive noise, which is related to the shooting angle and lens parameters and unrelated to the background image displayed on the screen of the display device, in the embodiment of the application the first moire image and the screenshot can be modeled to synthesize a moire sample image.
Accordingly, in some embodiments, the step 202 may specifically include the following steps (not shown in the figures): step 2021, step 2022, and step 2023, wherein,
In step 2021, the RGB values I_bg of each pixel in the screenshot, the RGB values I_0 of each pixel in the white image, and the RGB values I_moire1 of each pixel in the first moire image are obtained.
In step 2022, the moire noise I_moire-feature is calculated from I_0 and I_moire1. Specifically, from the model I_moire1 = I_moire-feature + I_0, we get I_moire-feature = I_moire1 - I_0.
In step 2023, the RGB values I_moire2 of each pixel in the moire sample image are calculated from I_moire-feature and I_bg, and the moire sample image is generated from I_moire2. Specifically, I_moire2 = I_moire-feature + I_bg = I_moire1 - I_0 + I_bg.
In step 203, with the camera out of focus, the white image displayed on the display device is photographed to obtain a first moire-free image, and the moire-free sample image corresponding to the moire sample image is generated from the screenshot, the white image and the first moire-free image.
In the embodiment of the application, the position of the camera of the photographing device is kept unchanged and the camera's AF is adjusted so that the camera is out of focus. Since no moire appears in the defocused state, a first moire-free image can be obtained whose illumination and shading are essentially consistent with those of the first moire image. The first moire-free image and the screenshot are then modeled to synthesize a moire-free sample image.
In some embodiments, the step 203 may specifically include the following steps (not shown in the figure): step 2031, step 2032, and step 2033, wherein,
In step 2031, the RGB values I_clean1 of each pixel in the first moire-free image are obtained.
In step 2032, the moire-free noise I_clean-feature is calculated from I_clean1 and I_0. Specifically, from the model I_clean1 = I_clean-feature + I_0, we get I_clean-feature = I_clean1 - I_0.
In step 2033, the RGB values I_clean2 of each pixel in the moire-free sample image are calculated from I_clean-feature and I_bg, and the moire-free sample image is generated from I_clean2. Specifically, I_clean2 = I_clean-feature + I_bg = I_clean1 - I_0 + I_bg.
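Putting steps 2021-2023 and 2031-2033 together, the synthesis reduces to two per-pixel additions. The sketch below is a minimal illustration assuming 8-bit RGB NumPy arrays and I_0 = 255 at every pixel (as the white image is defined above); the clipping to [0, 255] is an added assumption, since the patent does not say how out-of-range values are handled:

```python
import numpy as np

def synthesize_pair(screenshot, white_moire, white_clean):
    """Synthesize a (moire, moire-free) training pair from three captures.

    screenshot:  moire-free screen capture, I_bg          (H, W, 3) uint8
    white_moire: in-focus shot of the white image, I_moire1
    white_clean: defocused shot of the white image, I_clean1
    """
    bg = screenshot.astype(np.int32)
    # I_moire-feature = I_moire1 - I_0 and I_clean-feature = I_clean1 - I_0,
    # with I_0 = 255 at every pixel of the pure-white background image.
    moire_noise = white_moire.astype(np.int32) - 255
    clean_noise = white_clean.astype(np.int32) - 255
    # I_moire2 = I_moire-feature + I_bg ; I_clean2 = I_clean-feature + I_bg
    moire_sample = np.clip(bg + moire_noise, 0, 255).astype(np.uint8)
    clean_sample = np.clip(bg + clean_noise, 0, 255).astype(np.uint8)
    return moire_sample, clean_sample
```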
In the embodiment of the application, the moire in the synthesized image really exists and is very close to a real scene, which avoids the common problem of good results on the training set but poor results on real test images.
As can be seen from the above, in this embodiment moire shooting can be performed using the pure-white background image as the material, and the captured pure-white-background moire image is modeled together with the screenshot to synthesize a moire sample image. The camera is then kept in the same position and defocused, and the pure-white background image is photographed again; since no moire appears in the defocused state, a pure-white-background moire-free image can be captured whose illumination and shading are essentially consistent with those of the pure-white-background moire image. This moire-free capture is modeled together with the screenshot to synthesize a moire-free sample image, so that both the synthesized moire sample image and the moire-free sample image retain the illumination information of the original image. Finally, the synthesized moire sample images and moire-free sample images are used as training data for subsequent model training. Compared with the prior art, because the synthesized sample images retain the illumination information of the original image, the trained network model also retains this illumination information and can faithfully restore the original colors during moire removal. The image after moire removal restores the real appearance seen by the eye at capture time, so the result matches human visual perception and the moire removal effect is more natural.
In yet another embodiment of the present application, when the lightweight network is obtained by modifying a PyNET network, the generation process of the lightweight network may, on the basis of the embodiment shown in fig. 1 and as shown in fig. 3, include the following steps: step 301, step 302 and step 303, wherein,
in step 301, a PyNET network is obtained, where the PyNET network includes: an input layer and first through fifth feature extraction layers, which respectively extract features of the input image at 5 different scales; the scale of the features extracted by the i-th feature extraction layer is larger than that of the features extracted by the (i+1)-th feature extraction layer, with 1 ≤ i < 5.
In the embodiment of the present application, an existing deep learning network may be modified to obtain the lightweight network, for example the PyNET network from the paper "Replacing Mobile Camera ISP with a Single Deep Learning Model", as shown in fig. 4. The 5 Level layers in fig. 4, i.e. the Level1, Level2, Level3, Level4 and Level5 layers, correspond to the first, second, third, fourth and fifth feature extraction layers, respectively. The Level1 layer has the largest amount of computation and the Level5 layer the smallest.
In step 302, the first and second feature extraction layers in the PyNET network are deleted and the third, fourth and fifth feature extraction layers are retained; the number of convolution kernel channels of the third feature extraction layer is adjusted from a first value to a second value, that of the fourth feature extraction layer from a third value to a fourth value, and that of the fifth feature extraction layer from a fifth value to a sixth value, where the first value is greater than the second value, the third value is greater than the fourth value, and the fifth value is greater than the sixth value.
In the embodiment of the application, the Level1 and Level2 layers of the PyNET network are removed, and only the Level3, Level4 and Level5 layers are retained. After this modification, as shown in fig. 5, the network structure changes from a five-layer pyramid to a three-layer pyramid. Assuming the network input size is 512×512, the original PyNET network downsamples the input image 4 times before feeding it to the Level5 layer, whose output feature map is 32×32. After the Level5 features are obtained, they are upsampled and concatenated with the input features of the Level4 layer (the input reaches Level4 after 3 downsamplings, with a feature map size of 64×64). After the Level4 features are obtained, they are upsampled and concatenated with the input features of the Level3 layer (the input reaches Level3 after 2 downsamplings, with a feature map size of 128×128). In the same way, the Level1 layer finally produces a 512×512 output. The modified lightweight network contains only 3 Level layers: the network input is downsampled twice and fed to the Level5 layer, whose output feature map is 128×128. After the Level5 features are obtained, they are upsampled and concatenated with the input features of the Level4 layer (reached after 1 downsampling, feature map size 256×256). After the Level4 features are obtained, they are upsampled and concatenated with the input features of the Level3 layer (feature map size 512×512). The final output of the Level3 layer is the de-moired image predicted by the network model.
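For concreteness, the three-level structure just described can be sketched as follows in PyTorch. This is a hedged skeleton, not the patent's exact architecture: conv_block is a stand-in for PyNET's richer multi-kernel blocks, and all names and channel widths (32/64/128, per the next paragraph) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    # Plain conv + tanh stand-in; the real PyNET levels use richer blocks.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.Tanh())

class ThreeLevelDemoire(nn.Module):
    def __init__(self, ch3=32, ch4=64, ch5=128):
        super().__init__()
        self.level5 = conv_block(3, ch5)        # sees the 4x-downsampled input
        self.level4 = conv_block(3 + ch5, ch4)  # 2x-downsampled input + upsampled L5
        self.level3 = conv_block(3 + ch4, ch3)  # full-resolution input + upsampled L4
        self.out5 = nn.Conv2d(ch5, 3, 1)        # per-level RGB heads for supervision
        self.out4 = nn.Conv2d(ch4, 3, 1)
        self.out3 = nn.Conv2d(ch3, 3, 1)

    def forward(self, x):                       # x: (B, 3, 512, 512), values in (-1, 1)
        x4 = F.avg_pool2d(x, 2)                 # 256 x 256
        x5 = F.avg_pool2d(x4, 2)                # 128 x 128
        f5 = self.level5(x5)
        f4 = self.level4(torch.cat([x4, F.interpolate(f5, scale_factor=2)], dim=1))
        f3 = self.level3(torch.cat([x, F.interpolate(f4, scale_factor=2)], dim=1))
        return self.out3(f3), self.out4(f4), self.out5(f5)  # 512/256/128 outputs
```

Each level keeps a small RGB head so its intermediate prediction can be supervised against a correspondingly downsampled moire-free image during the progressive training described later.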
In the embodiment of the application, the numbers of convolution kernel channels used by the Level5, Level4 and Level3 layers in the original PyNET network are 512, 256 and 128 respectively. In a convolution layer, if the input is H×W×C with C the input depth (number of channels), the number of channels of a convolution kernel (filter) must equal the number of input channels, and is therefore also C. With a kernel of size K×K, one convolution kernel has dimensions K×K×C, and convolving it with the input yields one output channel; with P such kernels, the output has P channels. The modified lightweight network reduces the convolution kernel channel counts of the Level5, Level4 and Level3 layers to 128, 64 and 32 respectively. Because the number of kernel channels is reduced, the kernel dimensions shrink, so each matrix multiplication between kernel and input requires far less computation, and the number of output channels also falls. In a convolutional neural network the output of one layer is the input of the next, so this change compounds, sharply reducing the computation of every subsequent layer.
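To make the saving concrete, here is a rough multiply-accumulate count for a single 3×3 convolution layer (an illustrative back-of-the-envelope calculation, not a figure from the patent; both input and output channel counts shrink, since each layer's output feeds the next):

```python
def conv_macs(h, w, c_in, c_out, k=3):
    # Each output pixel of each output channel needs k*k*c_in multiply-adds.
    return h * w * c_out * k * k * c_in

# A Level3-like layer operating at 512x512, before (128 ch) vs after (32 ch):
before = conv_macs(512, 512, 128, 128)   # ~38.7e9 MACs
after = conv_macs(512, 512, 32, 32)      # ~2.4e9 MACs
print(before / after)                    # 16.0: a 4x channel cut gives ~16x here
```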
In step 303, the first normalization functions in the third, fourth and fifth feature extraction layers are deleted, a second normalization function is added at the input layer, and the activation functions in the third, fourth and fifth feature extraction layers are changed to the hyperbolic tangent function, obtaining the lightweight network; the second normalization function normalizes the pixel values of the input image from the range (0, 255) to the range (-1, 1).
In the embodiment of the application, when the PyNET network runs inference patch by patch, the stitched result shows severe color differences between patches. Analysis of the network structure shows that the normalization layers are the cause: the normalization statistics are computed per sample, i.e., from the image information of a single patch only, while global image information is ignored. Therefore, in the modified lightweight network the original normalization functions are removed and normalization is instead performed at the input layer during training, which resolves the color differences between stitched patches. The specific normalization maps the input image from the range (0, 255) to (-1, 1). The activation function in the PyNET network is the sigmoid, whose range (0, 1) is inconsistent with this input range, so the activation function is changed to the hyperbolic tangent.
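A minimal sketch of this input-layer normalization and its inverse, assuming the linear map x/127.5 - 1 (the patent only states the target range (-1, 1), not the exact formula):

```python
import torch

def normalize(img_uint8: torch.Tensor) -> torch.Tensor:
    # Map pixel values from [0, 255] into [-1, 1], matching tanh's output range.
    return img_uint8.float() / 127.5 - 1.0

def denormalize(pred: torch.Tensor) -> torch.Tensor:
    # Map a tanh-ranged prediction back to a displayable 8-bit image.
    return ((pred.clamp(-1.0, 1.0) + 1.0) * 127.5).round().to(torch.uint8)
```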
On the basis of the embodiment shown in fig. 3, as shown in fig. 6, the training process of the target model may specifically include the following steps: step 601, step 602 and step 603, wherein,
In step 601, the plurality of moire sample images are respectively input into the model to be trained; a first loss is acquired according to the predicted image output by the fifth feature extraction layer of the model to be trained and the 4x-downsampled moire-free sample image, and the parameters of the fifth feature extraction layer are updated according to the first loss until convergence, yielding a first intermediate model; the first loss indicates the difference between the predicted image output by the fifth feature extraction layer and the 4x-downsampled moire-free sample image.
In step 602, the plurality of moire sample images are respectively input into the first intermediate model; a second loss is acquired according to the predicted image output by the fourth feature extraction layer of the first intermediate model and the 2x-downsampled moire-free sample image, and the parameters of the first intermediate model are updated according to the second loss until convergence, yielding a second intermediate model; the second loss indicates the difference between the predicted image output by the fourth feature extraction layer and the 2x-downsampled moire-free sample image.
In step 603, the plurality of moire sample images are respectively input into the second intermediate model; a third loss is acquired according to the predicted image output by the third feature extraction layer of the second intermediate model and the corresponding moire-free sample image, and the model parameters of the second intermediate model are updated according to the third loss until convergence, yielding the target model; the third loss indicates the difference between the predicted image output by the third feature extraction layer and the corresponding moire-free sample image.
Concretely, the Level5 layer is trained first: the moire sample images are input into the model to be trained, the first loss is acquired from the image output by the Level5 layer and the 4x-downsampled clean image, and the model parameters are updated according to the first loss until the model training condition is met. After Level5 training ends, its model parameters are loaded and the Level4 layer is trained, with the second loss acquired from the image output by the Level4 layer and the 2x-downsampled clean image. By analogy, after Level4 training ends its parameters are loaded and the Level3 layer is trained, finally producing a predicted image with the same resolution as the input.
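The level-by-level schedule of steps 601-603 can be sketched as below, reusing the ThreeLevelDemoire skeleton above. The L1 loss, optimizer, and epoch count are assumptions; the patent only requires a loss measuring the difference between each level's prediction and the correspondingly downsampled moire-free image.

```python
import torch.nn.functional as F
from torch.optim import Adam

def train_level(model, head, loader, scale, epochs=10, lr=1e-4):
    """Train one pyramid level against the clean image downsampled by `scale`."""
    opt = Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for moire, clean in loader:            # both normalized into (-1, 1)
            target = F.avg_pool2d(clean, scale) if scale > 1 else clean
            pred = model(moire)[head]          # heads: 0 = L3, 1 = L4, 2 = L5
            loss = F.l1_loss(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Level 5 first (4x-downsampled target), then Level 4 (2x), then Level 3 (1x):
# train_level(model, head=2, loader=loader, scale=4)
# train_level(model, head=1, loader=loader, scale=2)
# train_level(model, head=0, loader=loader, scale=1)
```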
In the embodiment of the application, a model compression method is used to modify the network structure of the original large moire removal model without losing accuracy, reducing the model's computational cost: for example, the PyNET network requires 1695 GFLOPs for a 512×512 input, while the modified lightweight network requires only 51.6 GFLOPs for the same input.
Therefore, in the embodiment of the application, in order to deploy the moire removal network on the electronic device, so that a user photographing an electronic screen with a camera automatically triggers the moire removal function and quickly obtains a high-definition image that is free of moire and faithfully restores the captured scene, the original PyNET network can be compressed and quantized, greatly reducing the computational cost of the model without losing accuracy.
For the model training method provided by the embodiments of the application, the execution subject may be a model training apparatus. In the embodiments of the application, a model training apparatus executing the model training method is taken as an example to describe the model training apparatus provided by the embodiments of the application.
Fig. 7 is a flowchart of an image moire removal method according to an embodiment of the present application; moire removal is performed based on the target model trained in any of the above embodiments. As shown in fig. 7, the method may include the following steps: step 701, step 702, step 703 and step 704, wherein,
In step 701, a second moire image to be processed is received.
In one example, the user opens the camera application and the camera preview interface appears. The system acquires the YUV image data of the camera preview and passes it to the subject detection module, which judges whether the YUV image contains moire. Specifically, an existing image classification algorithm processes the input image and outputs whether moire is present. If the input image contains no moire, the system jumps directly to the preview interface; if it does contain moire, the moire removal algorithm is invoked. In this way, moire detection is performed on the camera preview image by an image classification algorithm, and moire can be removed automatically without any manual adjustment by the user and without any jarring transition. A sketch of this gating flow is given below.
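A hedged sketch of the gating just described, with the classifier and the demoire model injected as callables since the patent does not name concrete implementations:

```python
from typing import Callable
import numpy as np

def process_preview_frame(
    frame: np.ndarray,
    has_moire: Callable[[np.ndarray], bool],          # image-classification check
    remove_moire: Callable[[np.ndarray], np.ndarray],
) -> np.ndarray:
    # No moire detected: jump straight to the preview, untouched.
    if not has_moire(frame):
        return frame
    # Moire detected: invoke the removal algorithm before display.
    return remove_moire(frame)
```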
In step 702, in the case that the size of the second moire image exceeds the maximum size identifiable by the target model, the second moire image is segmented into N moire sub-images, where each of the N moire sub-images has a region overlapping its adjacent sub-images and N is an integer greater than 1.
In step 703, the N moire sub-images are respectively input into the target model for processing to obtain N moire-free sub-images.
In the embodiment of the application, because the memory on the mobile phone is limited, a large input image cannot be processed directly, so the image must be divided into several patches that are fed to the network for prediction separately.
In step 704, the N moire-free sub-images are stitched, and a pixel-weighted-average operation is performed on the overlapping regions during stitching to obtain a second moire-free image corresponding to the second moire image.
In the embodiment of the present application, in order to eliminate stitching lines, the input image cannot simply be divided into disjoint tiles. For example, sliding a 1120×1120 window over a 3000×3000 input image with a step of 940 yields 9 patches of size 1120×1120 with overlapping regions between them; taking a weighted average of the pixels in the overlapping regions eliminates the stitching lines.
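A sketch of this overlap-split-and-blend scheme with the numbers above. Uniform per-pixel averaging over the accumulated tiles is assumed for the weighting, and the image size is assumed to match the tiling exactly, as 3000×3000 does with a 1120 window and a 940 step:

```python
import numpy as np

def demoire_tiled(img, model_fn, tile=1120, stride=940):
    """Split into overlapping tiles, process each, and average the overlaps."""
    h, w, c = img.shape
    acc = np.zeros((h, w, c), dtype=np.float64)
    cover = np.zeros((h, w, 1), dtype=np.float64)
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = img[y:y + tile, x:x + tile]
            acc[y:y + tile, x:x + tile] += model_fn(patch)
            cover[y:y + tile, x:x + tile] += 1.0
    # Every pixel in an overlap is the average of all tiles covering it,
    # which removes visible stitching lines between adjacent patches.
    return (acc / cover).astype(img.dtype)

# 3000x3000 input, 1120 window, 940 step -> offsets {0, 940, 1880} per axis,
# i.e. 3 x 3 = 9 overlapping patches.
```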
As can be seen from the foregoing, in this embodiment, when the target model is used for moire removal, a large image to be processed can be divided into several parts with overlapping regions between them. Each part is input into the model separately to obtain a corresponding moire-free high-definition image; the per-part results are then stitched, with a pixel-level weighted-average operation applied to the overlapping region of each pair of images, yielding a complete high-definition image without stitching lines and a good moire removal result.
For the image moire removal method provided by the embodiments of the application, the execution subject may be an image moire removal device. In the embodiments of the application, an image moire removal device executing the image moire removal method is taken as an example to describe the image moire removal device provided by the embodiments of the application.
In summary, the embodiments of the application train with synthesized data so that the original image colors can be faithfully restored at prediction time, and use model compression to greatly reduce the computational cost of the model without losing accuracy. Moire detection is performed on the camera preview image by an image classification algorithm, so moire can be removed automatically without any manual adjustment by the user and without any jarring transition. The image after moire removal restores the real appearance seen by the human eye at capture time, so the photo matches human visual perception. After training with the synthesized data, the lightweight moire removal model shows a clear contrast between the image before and after moire removal.
Fig. 8 is a block diagram of a model training apparatus according to an embodiment of the present application. As shown in fig. 8, the model training apparatus 800 may include: an acquisition module 801, a construction module 802 and a training module 803, wherein,
An acquiring module 801, configured to acquire a plurality of moire sample images and corresponding moire-free sample images;
a building module 802, configured to build a model to be trained, where the model to be trained is a model built based on a lightweight network, and the lightweight network includes a plurality of feature extraction layers with different scales;
the training module 803 is configured to respectively input the plurality of moire sample images into the model to be trained, and acquire a first loss according to the predicted image output by the smallest-scale feature extraction layer in the model to be trained and the moire-free sample image downsampled to the same scale; update the parameters of the smallest-scale feature extraction layer according to the first loss until a preset training condition is met; and after training of the smallest-scale feature extraction layer is completed, apply the same training process to the feature extraction layer of the next larger scale in the model to be trained until training is completed on the largest-scale feature extraction layer, obtaining a target model.
According to this embodiment, an existing deep learning network can be compressed and quantized into a lightweight network, and model training can be performed on that lightweight network, reducing the computational cost of the model without losing accuracy. The moire removal network can thus be deployed on the electronic device itself, so that the moire removal function is triggered automatically when a user captures an image with the device, quickly producing a moire-free high-definition image that faithfully restores the captured scene and improving moire removal efficiency.
Optionally, as an embodiment, the model training apparatus 800 may further include:
the generation module is used for acquiring a PyNET network, deleting the feature extraction layers of specific scales in the PyNET network, reducing the numbers of convolution kernel channels of the retained feature extraction layers to preset values, and modifying the activation functions and normalization functions in the retained feature extraction layers to obtain the lightweight network, where a feature extraction layer of a specific scale extracts features of the input image at that scale.
Alternatively, as an embodiment, the generating module may include:
the first obtaining submodule is configured to obtain a PyNET network, where the PyNET network includes: the input layer, the first feature extraction layer, the second feature extraction layer, the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer are respectively used for extracting features with 5 different scales of an input image, the scale of the features extracted by the ith feature extraction layer is larger than that of the features extracted by the (i+1) th feature extraction layer, and i is more than or equal to 1 and less than or equal to 5;
a first modification submodule, configured to delete the first feature extraction layer and the second feature extraction layer in the PyNET network, reserve the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, adjust the number of convolution kernel channels of the third feature extraction layer from a first value to a second value, adjust the number of convolution kernel channels of the fourth feature extraction layer from a third value to a fourth value, and adjust the number of convolution kernel channels of the fifth feature extraction layer from a fifth value to a sixth value, where the first value is greater than the second value, the third value is greater than the fourth value, and the fifth value is greater than the sixth value;
And a second modification submodule, configured to delete a first normalization function in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, add a second normalization function in the input layer, and change an activation function in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer to a hyperbolic tangent function, so as to obtain a lightweight network, where the second normalization function is used to normalize a pixel value of an input image from a range of (0, 255) to a range of (-1, 1).
Alternatively, as an embodiment, the training module 803 may include:
the first training submodule is used for respectively inputting the plurality of moire sample images into the model to be trained, acquiring first loss according to the predicted image output by the fifth feature extraction layer in the model to be trained and the moire-free sample image after 4 times of downsampling, updating parameters of the fifth feature extraction layer according to the first loss until convergence to obtain a first intermediate model, wherein the first loss is used for indicating the difference between the predicted image output by the fifth feature extraction layer and the moire-free sample image after 4 times of downsampling;
The second training submodule is used for respectively inputting the plurality of moire sample images into the first intermediate model, acquiring second loss according to the predicted image output by the fourth feature extraction layer in the first intermediate model and the 2-time downsampled moire-free sample image, and updating parameters of the first intermediate model according to the second loss until convergence to obtain a second intermediate model, wherein the second loss is used for indicating the difference between the predicted image output by the fourth feature extraction layer and the 2-time downsampled moire-free sample image;
and the third training sub-module is used for respectively inputting the plurality of moire sample images into the second intermediate model, acquiring a third loss according to the predicted image output by the third feature extraction layer in the second intermediate model and the corresponding moire-free sample image, updating the model parameters of the second intermediate model according to the third loss until convergence to obtain a target model, wherein the third loss is used for indicating the difference between the predicted image output by the third feature extraction layer and the corresponding moire-free sample image.
Optionally, as an embodiment, the obtaining module 801 may include:
a second acquisition sub-module for acquiring a screenshot from the display device;
the first generation sub-module is used for photographing, with the camera in focus, the white image displayed on the display device to obtain a first moire image, and for generating a moire sample image from the screenshot, the white image and the first moire image;
the second generation sub-module is used for photographing, with the camera out of focus, the white image displayed on the display device to obtain a first moire-free image, and for generating the moire-free sample image corresponding to the moire sample image from the screenshot, the white image and the first moire-free image.
Optionally, as an embodiment, the first generating sub-module may include:
a first obtaining unit, configured to obtain RGB values I of each pixel point in the screenshot bg RGB value I of each pixel point in the white image 0 And RGB value I of each pixel point in the first moire image moire1
A first calculation unit for calculating the first calculation result according to the I 0 And I moire1 Calculate Moire noise I moire-feature
A first generation unit for generating a first output signal according to the I moire-feature And I bg Calculating RGB value I of each pixel point in the Moire sample image moire2 According to the I moire2 Generating the moire sample image;
the second generating sub-module may include:
a second obtaining unit, configured to obtain the RGB values I_clean1 of each pixel in the first moire-free image;
a second calculation unit, configured to calculate the moire-free noise I_clean-feature = I_clean1 - I_0 from I_clean1 and I_0;
a second generation unit, configured to calculate the RGB values I_clean2 = I_clean-feature + I_bg of each pixel in the moire-free sample image corresponding to the moire sample image, and to generate the moire-free sample image from I_clean2.
Fig. 9 is a block diagram of an image moire removal device according to an embodiment of the present application. As shown in fig. 9, the image moire removal device 900 may include: a receiving module 901, a segmentation module 902, a first processing module 903 and a second processing module 904, wherein,
a receiving module 901, configured to receive a second moire image to be processed;
a segmentation module 902, configured to segment the second moire image into N moire sub-images when the size of the second moire image exceeds the maximum size identifiable by the target model, where each sub-image of the N moire sub-images has a region overlapping with an adjacent sub-image, and N is an integer greater than 1;
the first processing module 903 is configured to respectively input the N moire sub-images into the target model for processing, to obtain N moire-free sub-images;
and the second processing module 904 is configured to perform stitching processing on the N non-moire sub-images, and perform pixel weighted average operation on an overlapping area in the stitching process to obtain a second non-moire image corresponding to the second moire image.
As can be seen from the foregoing, in this embodiment, when the target model is used for moire removal, an image to be processed whose size is too large can be segmented into a plurality of parts with overlapping areas between them. Each part is input into the model separately to obtain a corresponding moire-free high-definition image; the per-part results are then stitched together, with a pixel-level weighted average applied to the area where each pair of images overlaps, yielding a complete high-definition image without stitching seams and with a better moire removal effect.
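As a concrete illustration, the sketch below implements this overlapped-tile inference in NumPy. The tile size, overlap, and uniform per-tile weights are assumptions; the embodiment only requires that overlapping regions be combined by a pixel-weighted average, so `model_fn` stands in for the trained target model.

```python
import numpy as np

def _positions(length, tile, stride):
    """Top-left coordinates so that the tiles cover [0, length) and the
    last tile is clamped to the image border."""
    pos = list(range(0, length - tile + 1, stride))
    if pos[-1] != length - tile:
        pos.append(length - tile)
    return pos

def demoire_tiled(image, model_fn, tile=512, overlap=64):
    """Split an oversized image into overlapping tiles, demoire each tile,
    and blend the results by per-pixel weighted averaging (uniform weights)."""
    h, w, c = image.shape
    assert h >= tile and w >= tile, "only oversized images are tiled"
    acc = np.zeros((h, w, c), dtype=np.float32)
    weight = np.zeros((h, w, 1), dtype=np.float32)
    stride = tile - overlap
    for y in _positions(h, tile, stride):
        for x in _positions(w, tile, stride):
            patch = image[y:y + tile, x:x + tile]
            acc[y:y + tile, x:x + tile] += model_fn(patch).astype(np.float32)
            weight[y:y + tile, x:x + tile] += 1.0
    return (acc / weight).astype(image.dtype)
```

Smoother weights (for example, a Hann window per tile) would suppress seams even further, but a uniform average already satisfies the pixel-weighted-average step described above.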
The model training device and the image moire removing device in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, robot, wearable device, ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), netbook, or personal digital assistant (Personal Digital Assistant, PDA), and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (Personal Computer, PC), television (TV), teller machine, self-service machine, etc.; the embodiments of the present application are not limited in this respect.
The model training device and the image moire removing device in the embodiments of the present application may be devices with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The model training device and the image moire removing device provided by the embodiments of the present application can implement the processes implemented by the foregoing model training method and image moire removing method embodiments; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 10, an embodiment of the present application further provides an electronic device 1000, including a processor 1001 and a memory 1002, where the memory 1002 stores a program or instructions executable on the processor 1001. When executed by the processor 1001, the program or instructions implement the steps of the foregoing model training method or image moire removing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
The electronic devices in the embodiments of the present application include both mobile electronic devices and non-mobile electronic devices.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the application. The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, and processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1110 through a power management system so as to perform functions such as managing charging, discharging and power consumption. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than illustrated, combine some components, or use a different arrangement of components, which are not described in detail here.
In one embodiment provided by the present application, when the electronic device performs the model training method in the embodiment shown in fig. 1, the processor 1110 is configured to: obtain a plurality of moire sample images and corresponding moire-free sample images; construct a model to be trained, where the model to be trained is built on a lightweight network comprising a plurality of feature extraction layers of different scales; input the plurality of moire sample images into the model to be trained respectively, and obtain a first loss according to the predicted image output by the feature extraction layer of the smallest scale in the model to be trained and the moire-free sample image downsampled to the same scale; update parameters of the feature extraction layer of the smallest scale according to the first loss until a preset training condition is met; and after training of the feature extraction layer of the smallest scale is completed, apply the same training process to the feature extraction layer of the next larger scale in the model to be trained, until training of the feature extraction layer of the largest scale is completed, to obtain a target model.
Therefore, in the embodiments of the present application, an existing deep learning network can be compressed and quantized to obtain a lightweight network, and model training is performed on that lightweight network. This reduces the computational cost of the model without losing accuracy, making it practical to deploy the moire removal network on the electronic device itself: when a user photographs an image with the electronic device, the moire removal function is triggered automatically, quickly producing a moire-free high-definition image that faithfully restores the photographed picture and improving moire removal efficiency.
Optionally, as an embodiment, the processor 1110 is further configured to obtain a PyNET network, delete the feature extraction layers of specific scales in the PyNET network, reduce the number of convolution kernel channels of the retained feature extraction layers to preset values, and modify the activation function and the normalization function in the retained feature extraction layers to obtain a lightweight network, where a feature extraction layer of a specific scale is used to extract features of that scale from the input image.
Optionally, as an embodiment, the processor 1110 is further configured to obtain a PyNET network, where the PyNET network includes: an input layer, a first feature extraction layer, a second feature extraction layer, a third feature extraction layer, a fourth feature extraction layer and a fifth feature extraction layer, which are respectively used for extracting features of 5 different scales from the input image, the scale of the features extracted by the i-th feature extraction layer being larger than that of the features extracted by the (i+1)-th feature extraction layer, with 1 ≤ i < 5; delete the first feature extraction layer and the second feature extraction layer in the PyNET network and retain the third, fourth and fifth feature extraction layers; adjust the number of convolution kernel channels of the third feature extraction layer from a first value to a second value, the number of convolution kernel channels of the fourth feature extraction layer from a third value to a fourth value, and the number of convolution kernel channels of the fifth feature extraction layer from a fifth value to a sixth value, where the first value is larger than the second value, the third value is larger than the fourth value, and the fifth value is larger than the sixth value; and delete the first normalization function in the third, fourth and fifth feature extraction layers, add a second normalization function in the input layer, and change the activation functions in the third, fourth and fifth feature extraction layers to hyperbolic tangent functions, to obtain a lightweight network, where the second normalization function is used to normalize the pixel values of the input image from the range of (0, 255) to the range of (-1, 1).
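A rough PyTorch sketch of the resulting three-scale lightweight network is given below. The concrete channel counts, the inter-scale wiring, and the upsampling operator are assumptions (PyNET's real topology is more elaborate); what the sketch does reflect is the three retained scales, the Tanh activations, the absence of per-layer normalization, and the single input normalization from (0, 255) to (-1, 1).

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Retained layers use Tanh activations; the per-layer normalization
    # of the original PyNET is removed.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.Tanh())

class LightDemoireNet(nn.Module):
    """Three retained scales (levels 3-5 of PyNET); the channel counts are
    illustrative placeholders for the reduced second/fourth/sixth values."""
    def __init__(self, ch3=32, ch4=48, ch5=64):
        super().__init__()
        self.down = nn.AvgPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.level5 = conv_block(3, ch5)        # smallest scale (1/4 resolution)
        self.head5 = nn.Conv2d(ch5, 3, 3, padding=1)
        self.level4 = conv_block(3 + ch5, ch4)  # middle scale (1/2 resolution)
        self.head4 = nn.Conv2d(ch4, 3, 3, padding=1)
        self.level3 = conv_block(3 + ch4, ch3)  # largest retained scale (full res)
        self.head3 = nn.Conv2d(ch3, 3, 3, padding=1)

    def forward(self, x):
        # Input normalization replaces the deleted per-layer normalization:
        # map pixel values from (0, 255) to (-1, 1) at the input layer.
        x = x / 127.5 - 1.0
        x4 = self.down(self.down(x))            # 1/4-scale input
        f5 = self.level5(x4)
        out5 = self.head5(f5)                   # supervised with 4x-downsampled targets
        f4 = self.level4(torch.cat([self.down(x), self.up(f5)], dim=1))
        out4 = self.head4(f4)                   # supervised with 2x-downsampled targets
        f3 = self.level3(torch.cat([x, self.up(f4)], dim=1))
        out3 = self.head3(f3)                   # full-resolution output
        return out3, out4, out5
```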
Optionally, as an embodiment, the processor 1110 is further configured to: input the plurality of moire sample images into the model to be trained respectively, obtain a first loss according to the predicted image output by the fifth feature extraction layer in the model to be trained and the moire-free sample image downsampled by a factor of 4, and update parameters of the fifth feature extraction layer according to the first loss until convergence, to obtain a first intermediate model; input the plurality of moire sample images into the first intermediate model respectively, obtain a second loss according to the predicted image output by the fourth feature extraction layer in the first intermediate model and the moire-free sample image downsampled by a factor of 2, and update parameters of the first intermediate model according to the second loss until convergence, to obtain a second intermediate model; and input the plurality of moire sample images into the second intermediate model respectively, obtain a third loss according to the predicted image output by the third feature extraction layer in the second intermediate model and the corresponding moire-free sample image, and update model parameters of the second intermediate model according to the third loss until convergence, to obtain the target model.
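A condensed sketch of this coarse-to-fine schedule follows, using the three-scale network sketched above. The loss function, optimizer, and step counts are assumptions (the embodiments only require that each stage's loss measure the difference between a scale's prediction and the correspondingly downsampled moire-free target, trained until convergence), and for brevity every stage updates all parameters rather than only the newly trained layer.

```python
import torch
import torch.nn.functional as F

def train_progressively(model, loader, steps_per_stage=10_000, lr=1e-4):
    """Stage 1: fifth (smallest) scale vs. 4x-downsampled targets.
    Stage 2: fourth scale vs. 2x-downsampled targets.
    Stage 3: third (full-resolution) scale vs. the original targets.
    `loader` is assumed to cycle indefinitely over (moire, clean) float
    batches with values in [0, 255] and sides divisible by 4."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for scale in (4, 2, 1):  # one stage per retained feature extraction layer
        for _, (moire, clean) in zip(range(steps_per_stage), loader):
            clean = clean / 127.5 - 1.0          # match the model's (-1, 1) range
            out3, out4, out5 = model(moire)
            pred = {4: out5, 2: out4, 1: out3}[scale]
            target = clean if scale == 1 else F.avg_pool2d(clean, scale)
            loss = F.l1_loss(pred, target)       # assumed L1; no loss form is fixed
            opt.zero_grad()
            loss.backward()
            opt.step()
```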
Optionally, as an embodiment, the processor 1110 is further configured to: obtain a screenshot from a display device; photograph a white image displayed on the display device with the camera in focus to obtain a first moire image, and generate a moire sample image according to the screenshot, the white image and the first moire image; and photograph the white image displayed on the display device with the camera out of focus to obtain a first moire-free image, and generate a moire-free sample image according to the screenshot, the white image and the first moire-free image.
Optionally, as an embodiment, the processor 1110 is further configured to: obtain the RGB values I_bg of each pixel in the screenshot, the RGB values I_0 of each pixel in the white image, and the RGB values I_moire1 of each pixel in the first moire image; calculate the moire noise I_moire-feature according to I_0 and I_moire1; calculate the RGB values I_moire2 of each pixel in the moire sample image according to I_moire-feature and I_bg, and generate the moire sample image according to I_moire2; obtain the RGB values I_clean1 of each pixel in the first moire-free image; calculate the moire-free noise I_clean-feature according to I_clean1 and I_0; and calculate the RGB values I_clean2 of each pixel in the moire-free sample image corresponding to the moire sample image according to I_clean-feature and I_bg, and generate the moire-free sample image according to I_clean2.
In another embodiment provided by the present application, when the electronic device performs the image moire removing method in the embodiment shown in fig. 7, the processor 1110 is configured to: receive a second moire image to be processed; when the size of the second moire image exceeds the maximum size identifiable by the target model, segment the second moire image into N moire sub-images, where each of the N moire sub-images has an area overlapping its adjacent sub-images and N is an integer greater than 1; input the N moire sub-images into the target model respectively for processing, to obtain N moire-free sub-images; and stitch the N moire-free sub-images, performing a pixel-weighted average operation on the overlapping areas during stitching, to obtain a second moire-free image corresponding to the second moire image.
Therefore, in the embodiments of the present application, when the target model is used for moire removal, an oversized image to be processed can be segmented into a plurality of parts with overlapping areas between them; each part is input into the model separately to obtain a corresponding moire-free high-definition image, the per-part results are stitched, and a pixel-level weighted average is applied to the area where each pair of images overlaps, finally yielding a complete high-definition image without stitching seams and with a good moire removal effect.
It should be appreciated that, in the embodiments of the present application, the input unit 1104 may include a graphics processing unit (Graphics Processing Unit, GPU) 11041 and a microphone 11042; the graphics processor 11041 processes image data of still images or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071, also referred to as a touch screen, and other input devices 11072. The touch panel 11071 may include two parts: a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse and a joystick, which are not described in detail here.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area for programs or instructions and a second storage area for data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1109 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM) or a direct rambus RAM (Direct Rambus RAM, DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1110 may include one or more processing units. Optionally, the processor 1110 integrates an application processor, which mainly handles operations involving the operating system, user interface, application programs and the like, and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1110.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the foregoing model training method or image moire removing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run programs or instructions to implement the processes of the foregoing model training method or image moire removing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
An embodiment of the present application further provides a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the processes of the foregoing model training method or image moire removing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus the necessary general hardware platform, or by means of hardware, though in many cases the former is preferred. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (16)

1. A method of model training, the method comprising:
acquiring a plurality of moire sample images and corresponding moire-free sample images;
constructing a model to be trained, wherein the model to be trained is a model constructed based on a lightweight network, and the lightweight network comprises a plurality of feature extraction layers with different scales;
inputting the plurality of moire sample images into the model to be trained respectively, and obtaining a first loss according to a predicted image output by the feature extraction layer of the smallest scale in the model to be trained and the moire-free sample image downsampled to the same scale;
updating parameters of the feature extraction layer of the smallest scale according to the first loss until a preset training condition is met;
and after training of the feature extraction layer of the smallest scale is completed, applying the same training process to the feature extraction layer of the next larger scale in the model to be trained, until training of the feature extraction layer of the largest scale is completed, to obtain a target model.
2. The method of claim 1, further comprising, prior to the step of constructing the model to be trained:
obtaining a PyNET network, deleting feature extraction layers of specific scales in the PyNET network, reducing the number of convolution kernel channels of the retained feature extraction layers to preset values, and modifying the activation function and the normalization function in the retained feature extraction layers to obtain a lightweight network, wherein a feature extraction layer of a specific scale is used for extracting features of that scale from an input image.
3. The method of claim 2, wherein the obtaining a PyNET network, deleting feature extraction layers of specific scales in the PyNET network, reducing the number of convolution kernel channels of the retained feature extraction layers to preset values, and modifying the activation function and the normalization function in the retained feature extraction layers to obtain a lightweight network comprises:
obtaining a PyNET network, wherein the PyNET network comprises: an input layer, a first feature extraction layer, a second feature extraction layer, a third feature extraction layer, a fourth feature extraction layer and a fifth feature extraction layer, which are respectively used for extracting features of 5 different scales from an input image, the scale of the features extracted by the i-th feature extraction layer being larger than that of the features extracted by the (i+1)-th feature extraction layer, with 1 ≤ i < 5;
deleting the first feature extraction layer and the second feature extraction layer in the PyNET network, retaining the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, adjusting the number of convolution kernel channels of the third feature extraction layer from a first value to a second value, adjusting the number of convolution kernel channels of the fourth feature extraction layer from a third value to a fourth value, and adjusting the number of convolution kernel channels of the fifth feature extraction layer from a fifth value to a sixth value, wherein the first value is larger than the second value, the third value is larger than the fourth value, and the fifth value is larger than the sixth value;
and deleting the first normalization function in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, adding a second normalization function in the input layer, and changing the activation functions in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer to hyperbolic tangent functions, to obtain a lightweight network, wherein the second normalization function is used for normalizing the pixel values of an input image from the range of (0, 255) to the range of (-1, 1).
4. The method according to claim 3, wherein the inputting the plurality of moire sample images into the model to be trained respectively, and obtaining a first loss according to a predicted image output by the feature extraction layer of the smallest scale in the model to be trained and the moire-free sample image downsampled to the same scale; updating parameters of the feature extraction layer of the smallest scale according to the first loss until a preset training condition is met; and after training of the feature extraction layer of the smallest scale is completed, applying the same training process to the feature extraction layer of the next larger scale in the model to be trained, until training of the feature extraction layer of the largest scale is completed, to obtain a target model, comprises the following steps:
inputting the plurality of moire sample images into the model to be trained respectively, obtaining a first loss according to the predicted image output by the fifth feature extraction layer in the model to be trained and the moire-free sample image downsampled by a factor of 4, and updating parameters of the fifth feature extraction layer according to the first loss until convergence, to obtain a first intermediate model, wherein the first loss is used for indicating the difference between the predicted image output by the fifth feature extraction layer and the moire-free sample image downsampled by a factor of 4;
inputting the plurality of moire sample images into the first intermediate model respectively, obtaining a second loss according to the predicted image output by the fourth feature extraction layer in the first intermediate model and the moire-free sample image downsampled by a factor of 2, and updating parameters of the first intermediate model according to the second loss until convergence, to obtain a second intermediate model, wherein the second loss is used for indicating the difference between the predicted image output by the fourth feature extraction layer and the moire-free sample image downsampled by a factor of 2;
and inputting the plurality of moire sample images into the second intermediate model respectively, obtaining a third loss according to the predicted image output by the third feature extraction layer in the second intermediate model and the corresponding moire-free sample image, and updating model parameters of the second intermediate model according to the third loss until convergence, to obtain a target model, wherein the third loss is used for indicating the difference between the predicted image output by the third feature extraction layer and the corresponding moire-free sample image.
5. The method of claim 1, wherein the acquiring a plurality of moire sample images and corresponding moire-free sample images comprises:
obtaining a screenshot from a display device;
photographing a white image displayed on the display device with the camera in focus to obtain a first moire image, and generating a moire sample image according to the screenshot, the white image and the first moire image;
and photographing the white image displayed on the display device with the camera out of focus to obtain a first moire-free image, and generating a moire-free sample image corresponding to the moire sample image according to the screenshot, the white image and the first moire-free image.
6. The method of claim 5, wherein the generating a moire sample image according to the screenshot, the white image and the first moire image comprises:
obtaining the RGB values I_bg of each pixel in the screenshot, the RGB values I_0 of each pixel in the white image, and the RGB values I_moire1 of each pixel in the first moire image;
calculating the moire noise I_moire-feature according to I_0 and I_moire1;
calculating the RGB values I_moire2 of each pixel in the moire sample image according to I_moire-feature and I_bg, and generating the moire sample image according to I_moire2;
and the generating a moire-free sample image corresponding to the moire sample image according to the screenshot, the white image and the first moire-free image comprises:
obtaining the RGB values I_clean1 of each pixel in the first moire-free image;
calculating the moire-free noise I_clean-feature according to I_clean1 and I_0;
calculating the RGB values I_clean2 of each pixel in the moire-free sample image corresponding to the moire sample image according to I_clean-feature and I_bg, and generating the moire-free sample image according to I_clean2.
7. An image moire removing method for performing moire removing processing based on the target model generated by the method of any one of claims 1 to 6, the method comprising:
receiving a second moire image to be processed;
segmenting the second moire image into N moire sub-images when the size of the second moire image exceeds the maximum size identifiable by the target model, wherein each of the N moire sub-images has an area overlapping its adjacent sub-images, and N is an integer greater than 1;
inputting the N moire sub-images into the target model respectively for processing, to obtain N moire-free sub-images;
and stitching the N moire-free sub-images, performing a pixel-weighted average operation on the overlapping areas during stitching, to obtain a second moire-free image corresponding to the second moire image.
8. A model training apparatus, the apparatus comprising:
the acquisition module is used for acquiring a plurality of moire sample images and corresponding moire-free sample images;
the building module is used for building a model to be trained, wherein the model to be trained is a model built based on a lightweight network, and the lightweight network comprises a plurality of feature extraction layers with different scales;
the training module is used for inputting the plurality of moire sample images into the model to be trained respectively, and obtaining a first loss according to a predicted image output by the feature extraction layer of the smallest scale in the model to be trained and the moire-free sample image downsampled to the same scale;
updating parameters of the feature extraction layer of the smallest scale according to the first loss until a preset training condition is met;
and after training of the feature extraction layer of the smallest scale is completed, applying the same training process to the feature extraction layer of the next larger scale in the model to be trained, until training of the feature extraction layer of the largest scale is completed, to obtain a target model.
9. The apparatus of claim 8, wherein the apparatus further comprises:
the generation module is used for obtaining a PyNET network, deleting feature extraction layers of specific scales in the PyNET network, reducing the number of convolution kernel channels of the retained feature extraction layers to preset values, and modifying the activation function and the normalization function in the retained feature extraction layers to obtain a lightweight network, wherein a feature extraction layer of a specific scale is used for extracting features of that scale from an input image.
10. The apparatus of claim 9, wherein the generating module comprises:
the first obtaining submodule is configured to obtain a PyNET network, where the PyNET network includes: the input layer, the first feature extraction layer, the second feature extraction layer, the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer are respectively used for extracting features with 5 different scales of an input image, the scale of the features extracted by the ith feature extraction layer is larger than that of the features extracted by the (i+1) th feature extraction layer, and i is more than or equal to 1 and less than or equal to 5;
a first modification submodule, configured to delete the first feature extraction layer and the second feature extraction layer in the PyNET network, retain the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, adjust the number of convolution kernel channels of the third feature extraction layer from a first value to a second value, adjust the number of convolution kernel channels of the fourth feature extraction layer from a third value to a fourth value, and adjust the number of convolution kernel channels of the fifth feature extraction layer from a fifth value to a sixth value, wherein the first value is greater than the second value, the third value is greater than the fourth value, and the fifth value is greater than the sixth value;
and a second modification submodule, configured to delete a first normalization function in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer, add a second normalization function in the input layer, and change an activation function in the third feature extraction layer, the fourth feature extraction layer and the fifth feature extraction layer to a hyperbolic tangent function, so as to obtain a lightweight network, where the second normalization function is used to normalize a pixel value of an input image from a range of (0, 255) to a range of (-1, 1).
11. The apparatus of claim 10, wherein the training module comprises:
the first training submodule is used for inputting the plurality of moire sample images into the model to be trained respectively, obtaining a first loss according to the predicted image output by the fifth feature extraction layer in the model to be trained and the moire-free sample image downsampled by a factor of 4, and updating parameters of the fifth feature extraction layer according to the first loss until convergence to obtain a first intermediate model, wherein the first loss is used for indicating the difference between the predicted image output by the fifth feature extraction layer and the moire-free sample image downsampled by a factor of 4;
the second training submodule is used for inputting the plurality of moire sample images into the first intermediate model respectively, obtaining a second loss according to the predicted image output by the fourth feature extraction layer in the first intermediate model and the moire-free sample image downsampled by a factor of 2, and updating parameters of the first intermediate model according to the second loss until convergence to obtain a second intermediate model, wherein the second loss is used for indicating the difference between the predicted image output by the fourth feature extraction layer and the moire-free sample image downsampled by a factor of 2;
and the third training submodule is used for inputting the plurality of moire sample images into the second intermediate model respectively, obtaining a third loss according to the predicted image output by the third feature extraction layer in the second intermediate model and the corresponding moire-free sample image, and updating model parameters of the second intermediate model according to the third loss until convergence to obtain a target model, wherein the third loss is used for indicating the difference between the predicted image output by the third feature extraction layer and the corresponding moire-free sample image.
12. The apparatus of claim 8, wherein the acquisition module comprises:
a second acquisition sub-module for acquiring a screenshot from the display device;
the first generation submodule is used for photographing a white image displayed on the display device with the camera in focus to obtain a first moire image, and generating a moire sample image according to the screenshot, the white image and the first moire image;
the second generation submodule is used for photographing the white image displayed on the display device with the camera out of focus to obtain a first moire-free image, and generating a moire-free sample image corresponding to the moire sample image according to the screenshot, the white image and the first moire-free image.
13. The apparatus of claim 12, wherein the first generation sub-module comprises:
a first obtaining unit, configured to obtain the RGB values I_bg of each pixel in the screenshot, the RGB values I_0 of each pixel in the white image, and the RGB values I_moire1 of each pixel in the first moire image;
a first calculation unit, configured to calculate the moire noise I_moire-feature according to I_0 and I_moire1;
a first generation unit, configured to calculate the RGB values I_moire2 of each pixel in the moire sample image according to I_moire-feature and I_bg, and to generate the moire sample image according to I_moire2;
the second generating submodule includes:
a second obtaining unit, configured to obtain the RGB values I_clean1 of each pixel in the first moire-free image;
a second calculation unit, configured to calculate the moire-free noise I_clean-feature according to I_clean1 and I_0;
a second generation unit, configured to calculate the RGB values I_clean2 of each pixel in the moire-free sample image corresponding to the moire sample image according to I_clean-feature and I_bg, and to generate the moire-free sample image according to I_clean2.
14. An image moire removing device for performing moire removing processing based on the target model generated by the apparatus of any one of claims 8 to 13, the device comprising:
The receiving module is used for receiving a second moire image to be processed;
the segmentation module is used for segmenting the second moire image into N moire sub-images when the size of the second moire image exceeds the maximum size identifiable by the target model, wherein each of the N moire sub-images has an area overlapping its adjacent sub-images, and N is an integer greater than 1;
the first processing module is used for inputting the N moire sub-images into the target model respectively for processing, to obtain N moire-free sub-images;
and the second processing module is used for stitching the N moire-free sub-images, performing a pixel-weighted average operation on the overlapping areas during stitching, to obtain a second moire-free image corresponding to the second moire image.
15. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the model training method of any one of claims 1 to 6, or the steps of the image moire removing method of claim 7.
16. A readable storage medium having a program or instructions stored thereon, wherein the program or instructions, when executed by a processor, implement the steps of the model training method of any one of claims 1 to 6, or the steps of the image moire removing method of claim 7.