
CN110335237B - Method and device for generating model and method and device for recognizing image - Google Patents


Info

Publication number
CN110335237B
CN110335237B (application CN201910371718.4A)
Authority
CN
China
Prior art keywords
sample
image
preset
images
quality
Prior art date
Legal status
Active
Application number
CN201910371718.4A
Other languages
Chinese (zh)
Other versions
CN110335237A (en)
Inventor
陈奇
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910371718.4A
Publication of CN110335237A
Application granted
Publication of CN110335237B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure disclose methods and apparatuses for generating models. One embodiment of the method comprises: acquiring a preset training sample set, where each training sample includes a sample image set and a sample recognition result; the sample image set includes a preset number of sample images, and the sample recognition result includes a preset number of sample quality values, each corresponding to a sample image and characterizing the quality of that image relative to the other images in the set; and training an initial model by a machine learning method, taking the sample image set as input and the sample recognition result as expected output, and determining the trained initial model as an image quality recognition model. This embodiment can generate a more accurate image quality recognition model, which helps output images whose quality meets preset requirements more accurately and objectively.

Description

Method and device for generating model and method and device for recognizing image
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for generating a model and a method and an apparatus for recognizing an image.
Background
The quality of an image may be determined by various factors, such as the hue, sharpness, or position distribution of objects in the image. In practice, images are scored by using a pre-trained model, so as to determine the quality of the images.
In the prior art, training a model for scoring image quality requires first obtaining sample images, then manually scoring each sample image to obtain sample scores, and finally training the model with the sample images and their scores.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for generating models, and methods and apparatuses for recognizing images.
In a first aspect, an embodiment of the present disclosure provides a method for generating a model, the method including: acquiring a preset training sample set, where each training sample includes a sample image set and a sample recognition result predetermined for that set; the sample image set includes a preset number of sample images, and the sample recognition result includes a preset number of sample quality values, each corresponding to a sample image in the set and characterizing the quality of that image relative to the other images in the set; and training an initial model by a machine learning method, taking the sample image sets included in the training samples as the input of the initial model and the sample recognition results corresponding to the input sets as the expected output, and determining the trained initial model as an image quality recognition model.
In some embodiments, the sample recognition result corresponding to the sample image set in the training sample is determined by: determining a quality order of sample images in the sample image set; assigning a sample quality value to a sample image of the set of sample images based on the determined order of merits; a sample identification result corresponding to the set of sample images is generated that includes the assigned sample quality value.
In a second aspect, an embodiment of the present disclosure provides a method for recognizing an image, the method including: acquiring an image to be identified; using the image to be recognized, performing the following recognition steps: the image to be recognized is input into the image quality recognition model generated by the method described in the above first aspect, and the recognition result is obtained, wherein the recognition result includes a quality value, and the quality value is used for representing the quality degree of the input image to be recognized.
In some embodiments, the method further comprises: determining whether the size of the quality value in the obtained identification result meets a preset condition; and in response to the fact that the size of the quality value in the obtained identification result meets the preset condition, sending the image to be identified to the user terminal in communication connection, and controlling the user terminal to display the image to be identified.
In some embodiments, acquiring the image to be recognized includes: acquiring a preset image set; and selecting a preset image from the preset image set as an image to be identified.
In some embodiments, the identifying step further comprises: determining whether the preset image set comprises unselected preset images; and in response to determining that the preset image set does not comprise the unselected preset images, determining a result image corresponding to the preset image set based on the selected image to be identified.
In some embodiments, the method further comprises: in response to the fact that the preset image set comprises the unselected preset images, reselecting the preset images from the unselected preset images in the preset image set as the images to be recognized; and continuing to execute the identification step by utilizing the image to be identified which is selected for the last time.
In some embodiments, determining, based on the selected image to be recognized, a result image corresponding to the preset image set includes: and according to the magnitude sequence of the quality values corresponding to the selected images to be recognized, extracting the images to be recognized from the selected images to be recognized as result images corresponding to a preset image set.
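The result-image extraction described in this embodiment (ranking the recognized images by quality value and taking the best ones) can be sketched minimally as follows, assuming the "larger value = better quality" convention and a hypothetical `top_k` count of result images:

```python
def select_result_images(quality_by_image, top_k=1):
    """Given a mapping from recognized image to its quality value, return
    the top_k images in descending order of quality value as the result
    images for the preset image set."""
    ranked = sorted(quality_by_image, key=quality_by_image.get, reverse=True)
    return ranked[:top_k]
```

Under the opposite convention (smaller value = better), the same function would simply sort ascending.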
In a third aspect, an embodiment of the present disclosure provides an apparatus for generating a model, the apparatus including: a sample acquisition unit configured to acquire a preset training sample set, where each training sample includes a sample image set and a sample recognition result predetermined for that set; the sample image set includes a preset number of sample images, and the sample recognition result includes a preset number of sample quality values, each corresponding to a sample image and characterizing its quality relative to the other images in the set; and a model training unit configured to train an initial model by a machine learning method, taking the sample image sets as the input of the initial model and the corresponding sample recognition results as the expected output, and to determine the trained initial model as the image quality recognition model.
In some embodiments, the sample recognition result corresponding to the sample image set in the training sample is determined by: determining a quality order of sample images in the sample image set; assigning a sample quality value to a sample image of the set of sample images based on the determined order of merits; a sample identification result corresponding to the set of sample images is generated that includes the assigned sample quality value.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for recognizing an image, the apparatus including: an image acquisition unit configured to acquire an image to be recognized; a first recognition unit configured to perform, using an image to be recognized, the following recognition steps: the image to be recognized is input into the image quality recognition model generated by the method described in the above first aspect, and the recognition result is obtained, wherein the recognition result includes a quality value, and the quality value is used for representing the quality degree of the input image to be recognized.
In some embodiments, the apparatus further comprises: a determination unit configured to determine whether a magnitude of a quality value in the obtained recognition result satisfies a preset condition; and the sending unit is configured to respond to the fact that the size of the quality value in the obtained identification result meets the preset condition, send the image to be identified to the user terminal of the communication connection, and control the user terminal to display the image to be identified.
In some embodiments, the image acquisition unit comprises: an acquisition module configured to acquire a preset image set; and the selecting module is configured to select a preset image from the preset image set as the image to be identified.
In some embodiments, the identifying step further comprises: determining whether the preset image set comprises unselected preset images; and in response to determining that the preset image set does not comprise the unselected preset images, determining a result image corresponding to the preset image set based on the selected image to be identified.
In some embodiments, the apparatus further comprises: the image recognition device comprises a selecting unit and a recognition unit, wherein the selecting unit is configured to respond to the fact that the preset image set comprises unselected preset images, and reselect the preset images from the unselected preset images contained in the preset image set as images to be recognized; and the second identification unit is configured to continue to execute the identification step by utilizing the image to be identified which is selected last time.
In some embodiments, determining, based on the selected image to be recognized, a result image corresponding to the preset image set includes: and according to the magnitude sequence of the quality values corresponding to the selected images to be recognized, extracting the images to be recognized from the selected images to be recognized as result images corresponding to a preset image set.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method according to any one of the embodiments of the method described in the first or second aspects above.
In a sixth aspect, embodiments of the present disclosure provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method of any of the embodiments of the methods described in the first or second aspects above.
The method and apparatus for generating a model provided by the embodiments of the present disclosure obtain a preset training sample set, where each training sample includes a sample image set and a sample recognition result predetermined for it: the sample image set includes a preset number of sample images, and the sample recognition result includes a preset number of sample quality values, each corresponding to a sample image and characterizing its quality relative to the other images in the set. A machine learning method then trains an initial model, taking the sample image set as input and the corresponding sample recognition result as expected output, and the trained initial model is determined as the image quality recognition model. Thus, when training the image quality recognition model, the sample quality value of a sample image can be determined by image comparison.
The method and apparatus for recognizing an image provided by the embodiments of the present disclosure acquire an image to be recognized and then perform the following recognition step: the image to be recognized is input into the image quality recognition model generated by the method described in the first aspect to obtain a recognition result, where the recognition result includes a quality value characterizing the quality of the input image. The image quality recognition model can thus be used to recognize image quality, yielding a more accurate recognition result and allowing images whose quality meets preset requirements to be output more accurately and objectively.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating a model according to the present disclosure;
FIG. 3 is a schematic illustration of one application scenario of a method for generating a model according to an embodiment of the present disclosure;
FIG. 4 is a flow diagram of one embodiment of a method for recognizing an image according to the present disclosure;
FIG. 5 is a schematic diagram of an embodiment of an apparatus for generating models according to the present disclosure;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for recognizing images according to the present disclosure;
FIG. 7 is a schematic block diagram of a computer system suitable for use with an electronic device implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method for generating a model, the apparatus for generating a model, the method for recognizing an image, or the apparatus for recognizing an image of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as an image processing application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, electronic book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server that provides various services, such as an image processing server that processes images to be recognized transmitted by the terminal apparatuses 101, 102, 103. The image processing server may perform processing such as analysis on the received data of the image to be recognized and the like, and obtain a processing result (e.g., a recognition result).
It should be noted that the method for generating the model provided by the embodiment of the present disclosure may be executed by the server 105, or may be executed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for generating the model may be disposed in the server 105, or may be disposed in the terminal devices 101, 102, and 103; furthermore, the method for recognizing the image provided by the embodiment of the disclosure may be executed by the server 105, and may also be executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for recognizing the image may be disposed in the server 105, and may also be disposed in the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating a model according to the present disclosure is shown. The method for generating the model comprises the following steps:
step 201, a preset training sample set is obtained.
In this embodiment, the execution subject of the method for generating a model (e.g., the server 105 shown in fig. 1) may obtain a preset training sample set from a remote or local place through a wired connection or a wireless connection. The training samples in the training sample set comprise a sample image set and a sample recognition result predetermined for the sample image set. The sample image set includes a preset number of sample images. The sample identification result includes a preset number of sample quality values. The sample quality value in the preset number of sample quality values corresponds to a sample image in the sample image set and is used for representing the quality degree of the corresponding sample image compared with other sample images in the sample image set. Specifically, the larger the sample quality value is, the better the quality of the sample image can be represented; alternatively, the smaller the sample quality value, the better the quality of the representative sample image. Here, the preset number may be a predetermined number equal to or greater than 2.
In some optional implementation manners of this embodiment, the sample recognition result corresponding to the sample image set in the training sample may be determined by the executing subject or other electronic device through the following steps: firstly, determining the quality sequence of sample images in a sample image set; then, based on the determined quality sequence, allocating a sample quality value to the sample images in the sample image set; finally, a sample identification result corresponding to the sample image set and including the assigned sample quality value is generated.
Here, various methods may be used to determine the quality order of the sample images in the sample image set. For example, the sharpness of each sample image may be determined separately, and the quality order then determined from the order of sharpness, with higher sharpness indicating better quality. Alternatively, the sample image set may be output to a user, and user-input information indicating the quality order of the sample images acquired.
In practice, various methods may be employed to determine the sharpness of a sample image. As examples, sharpness may be determined based on a gradient function (e.g., an energy gradient function or the Brenner gradient function), an SMD (grayscale variance) function, an entropy function, and so on.
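As an illustration of the sharpness measures just mentioned, the following sketch implements two of them (the Brenner gradient and a simple SMD-style absolute-difference measure) for a grayscale image stored as a 2-D array. The exact formulas vary across the literature, so these are representative variants, not the patent's definitive implementation:

```python
import numpy as np

def brenner_sharpness(gray: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels
    two columns apart; larger means sharper."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]
    return float(np.sum(diff ** 2))

def smd_sharpness(gray: np.ndarray) -> float:
    """SMD-style measure: sum of absolute differences between
    neighboring pixels along both axes; larger means sharper."""
    g = gray.astype(np.float64)
    return float(np.sum(np.abs(g[1:, :] - g[:-1, :]))
                 + np.sum(np.abs(g[:, 1:] - g[:, :-1])))
```

A uniform image scores zero under both measures, while an image containing edges scores higher, which is what makes these functions usable for ordering sample images by sharpness.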
In this implementation, the execution subject or other electronic device may assign sample quality values to the images in the sample image set based on the determined quality order. Specifically, when a larger sample quality value denotes better quality, values are assigned so that better-quality images receive larger values than worse-quality images; when a smaller sample quality value denotes better quality, values are assigned so that better-quality images receive smaller values.
As an example, the sample image set of a certain training sample includes three sample images: sample image A, sample image B, and sample image C. Comparison determines their quality order as: sample image B is best, sample image A next, and sample image C worst. If it is specified that a larger sample quality value denotes better quality, sample quality values can be assigned to the three images so that B receives the largest value, A the next largest, and C the smallest.
It should be noted that the specific values assigned to the sample images may be arbitrary. For example, continuing the above example, three distinct sample quality values may be preset and then assigned to sample image B, sample image A, and sample image C according to the determined quality order; alternatively, three values may be randomly selected from a preset value range and assigned to the sample images in the same way.
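The assignment described above can be sketched in a few lines. This minimal version assumes the "larger value = better quality" convention and uses descending integers as the (arbitrary) preset values:

```python
def assign_sample_quality_values(ordered_best_first):
    """Given sample images ordered best-first, assign sample quality
    values so that better images receive strictly larger values
    (larger value = better quality, per the assumed convention)."""
    n = len(ordered_best_first)
    return {img: float(n - rank) for rank, img in enumerate(ordered_best_first)}

# For the B > A > C example above:
values = assign_sample_quality_values(["B", "A", "C"])
```

Here `values` gives B the largest value, A the next largest, and C the smallest, matching the worked example; any other strictly decreasing sequence would serve equally well.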
Step 202, using a machine learning method, using a sample image set included in training samples in a training sample set as an input of an initial model, using a sample recognition result corresponding to the input sample image set as an expected output of the initial model, training the initial model, and determining the trained initial model as an image quality recognition model.
In this embodiment, based on the training sample set obtained in step 201, the executing entity may use a machine learning method to input a sample image set included in the training samples in the training sample set as an initial model, use a sample recognition result corresponding to the input sample image set as an expected output of the initial model, train the initial model, and determine the trained initial model as an image quality recognition model.
Here, various existing convolutional neural network structures may be used as the initial model for training. A convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within a receptive field, and it performs well on image processing; it can therefore be used to recognize the sample images in a training sample's image set. Note that other models with image-processing capability may also serve as the initial model; the model is not limited to convolutional neural networks, and the specific structure may be set according to actual requirements.
Specifically, during training, the executing entity may first input a sample image set into the initial model to obtain an actual recognition result, and then compute the difference between the actual recognition result and the sample recognition result corresponding to the input set using a preset loss function; for example, the L2 norm may serve as the loss function. If the computed difference is less than or equal to a preset threshold, training of the initial model is considered finished.
Specifically, if the computed difference is greater than the preset threshold, the initial model has not reached the predetermined optimization goal; the model may then be adjusted based on the computed difference, and training continues with unused training samples from the training sample set until the goal is reached. Various implementations may be employed to adjust the initial model based on the computed difference; for example, the BP (Back Propagation) algorithm and the SGD (Stochastic Gradient Descent) algorithm can be used.
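The training loop described above (forward pass, L2 loss against the expected output, threshold check, gradient adjustment) can be sketched as follows. To keep the sketch self-contained and runnable, a linear model over precomputed per-image feature vectors stands in for the convolutional network; the structure of the loop, not the model, is the point:

```python
import numpy as np

def train_quality_model(training_samples, lr=0.01, threshold=1e-4, max_epochs=2000):
    """training_samples: list of (X, y) pairs, where each row of X is a
    feature vector for one sample image in the set and y holds the
    corresponding sample quality values (the expected output)."""
    dim = training_samples[0][0].shape[1]
    w = np.zeros(dim)                          # initial model parameters
    for _ in range(max_epochs):
        worst = 0.0
        for X, y in training_samples:          # one SGD step per training sample
            diff = X @ w - y                   # actual minus expected output
            loss = float(np.sum(diff ** 2))    # squared L2-norm loss
            worst = max(worst, loss)
            w -= lr * 2.0 * X.T @ diff         # adjust model from the difference
        if worst <= threshold:                 # difference within preset threshold:
            break                              # training finished
    return w
```

In the full method, the same loop runs with a convolutional network in place of `w`, with BP computing the gradient that the SGD step applies.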
With continuing reference to FIG. 3, FIG. 3 is a schematic illustration of an application scenario of the method for generating a model according to the present embodiment. In the application scenario of fig. 3, the server 301 may first obtain a preset training sample set 302, where the training samples in the training sample set 302 include a sample image set and a sample recognition result predetermined for the sample image set. The sample image set comprises three (a preset number of) sample images, the sample identification result comprises three (a preset number of) sample quality values, and the sample quality value in the three sample quality values corresponds to the sample image in the sample image set and is used for representing the quality degree of the corresponding sample image compared with other sample images in the sample image set. Then, the server 301 may train the initial model 303 by using a machine learning method, with the sample image set included in the training samples in the training sample set 302 as an input of the initial model 303, and with the sample recognition result corresponding to the input sample image set as an expected output of the initial model 303, and determine the trained initial model 303 as the image quality recognition model 304.
According to the method provided by the embodiment of the present disclosure, when training the image quality recognition model, the sample quality value of a sample image can be determined by image comparison, which is more objective than the prior-art labeling approach that uses no reference object.
With further reference to FIG. 4, a flow 400 of one embodiment of a method for recognizing an image is shown. The flow 400 of the method for recognizing an image comprises the steps of:
step 401, acquiring an image to be identified.
In the present embodiment, an execution subject of the method for recognizing an image (for example, the server 105 shown in FIG. 1) may acquire an image to be recognized from a remote or local source via a wired or wireless connection. Here, the image to be recognized is an image whose quality is to be recognized.
In some optional implementations of the embodiment, the executing body may acquire the image to be recognized by: first, the execution subject may acquire a preset image set. Then, the execution subject may select a preset image from a preset image set as an image to be recognized.
Here, the preset image set may be a predetermined image set composed of various images. Specifically, the executing body may obtain the preset image set from a remote place or a local place.
In this implementation, the execution subject may select the preset image from the preset image set in various ways. For example, a preset image may be selected at random, or the preset image with the highest definition may be selected.
Step 402, using the image to be identified, executing the following identification steps: and inputting the image to be recognized into the image quality recognition model to obtain a recognition result.
In this embodiment, using the image to be recognized obtained in step 401, the execution subject may perform the following recognition step: input the image to be recognized into the image quality recognition model to obtain a recognition result. The recognition result includes a quality value, which characterizes the quality of the image to be recognized that was input into the image quality recognition model. Specifically, a larger quality value may indicate better quality of the image to be recognized; alternatively, a smaller quality value may indicate better quality.
In this embodiment, the image quality recognition model may be used to characterize the correspondence between the image and the recognition result corresponding to the image. Specifically, the image quality identification model is generated by using the method in the embodiment corresponding to fig. 2, and specific details may refer to the related description in the embodiment corresponding to fig. 2, which is not described herein again.
In some optional implementation manners of this embodiment, after obtaining the recognition result, the executing body may further perform the following steps: first, the execution body described above may determine whether the magnitude of the quality value in the obtained recognition result satisfies a preset condition. Then, the execution subject may send the image to be recognized to the user terminal of the communication connection and control the user terminal to display the image to be recognized in response to determining that the size of the quality value in the obtained recognition result satisfies the preset condition.
The preset condition may be used to define the minimum quality required for an image to be recognized to be sent to the user terminal. For example, when a larger quality value indicates better quality, the preset condition may be that the quality value in the recognition result is greater than or equal to a first preset quality threshold; the first preset quality threshold indicates the smallest quality value satisfying the preset condition and corresponds to the minimum required quality. Conversely, when a smaller quality value indicates better quality, the preset condition may be that the quality value in the recognition result is less than or equal to a second preset quality threshold; the second preset quality threshold indicates the largest quality value satisfying the preset condition, which likewise corresponds to the minimum required quality.
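The preset-condition check can be sketched as below. The threshold value and function names are assumptions chosen for illustration; the two branches correspond to the first and second preset quality thresholds described above.

```python
# Assumed value for the first preset quality threshold.
FIRST_PRESET_QUALITY_THRESHOLD = 0.7

def satisfies_preset_condition(quality_value, larger_is_better=True,
                               threshold=FIRST_PRESET_QUALITY_THRESHOLD):
    """Return whether the quality value in a recognition result
    satisfies the preset condition under the chosen convention."""
    if larger_is_better:
        # Larger value = better quality: compare against the
        # first preset quality threshold.
        return quality_value >= threshold
    # Smaller value = better quality: compare against the
    # second preset quality threshold.
    return quality_value <= threshold
```

Only an image whose quality value satisfies the condition would then be sent to the communicatively connected user terminal for display.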
In this implementation, the user terminal may be a terminal used by a user and communicatively connected to the execution main body. In practice, the execution main body may send a control signal to the user terminal, so as to control the user terminal to display the image to be recognized.
This implementation can send an image to be recognized whose quality value satisfies the preset condition to the user terminal and control the user terminal to display it, so that the user terminal displays only images whose quality values satisfy the preset condition, thereby improving the display effect. In addition, because a more accurate quality value can be obtained by using the image quality recognition model, this implementation can control the display of the image to be recognized on the user terminal more accurately.
In some optional implementation manners of this embodiment, when the image to be recognized is an image selected from the preset image set, the recognizing step may further include: determining whether the preset image set comprises unselected preset images; and in response to determining that the preset image set does not comprise the unselected preset images, determining a result image corresponding to the preset image set based on the selected image to be identified.
In this implementation, the resulting image may be an image with the best quality (e.g., the largest corresponding quality value) in the preset image set, or may also be an image with quality meeting a preset requirement (e.g., the corresponding quality value is greater than or equal to a preset quality threshold) in the preset image set.
Here, the preset image set does not include unselected preset images, which means that the selected image to be recognized only includes one image (that is, the preset image set includes only one preset image), and at this time, the execution subject may directly determine the image to be recognized as the result image.
In some optional implementation manners of this embodiment, the executing body may further reselect the preset image from the unselected preset images included in the preset image set as the image to be recognized in response to determining that the preset image set includes the unselected preset images; and continuing to execute the identification step by utilizing the image to be identified which is selected for the last time. Therefore, all the preset images in the preset image set can be identified by the image quality identification model through circulation, and then the result image corresponding to the preset image set can be determined based on the identification result.
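The selection loop just described can be sketched as follows. The `recognize` function is a placeholder assumption standing in for the image quality recognition model, and the image names are illustrative.

```python
def recognize(image):
    # Placeholder for "input the image to be recognized into the
    # image quality recognition model"; returns a quality value.
    return {"img_low": 0.2, "img_mid": 0.6, "img_high": 0.9}[image]

def determine_result_image(preset_image_set):
    """Cycle through the preset image set: while unselected preset
    images remain, reselect one as the image to be recognized and run
    the recognition step; once none remain, determine the result image
    (here: the one with the largest quality value)."""
    unselected = list(preset_image_set)
    recognized = []
    while unselected:                     # set still has unselected images
        image = unselected.pop(0)         # reselect an image to be recognized
        recognized.append((image, recognize(image)))
    # No unselected preset images remain: determine the result image
    # based on all selected images to be recognized.
    return max(recognized, key=lambda pair: pair[1])[0]

result = determine_result_image(["img_low", "img_mid", "img_high"])
```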
In this implementation, if the preset image set includes unselected preset images, the selected images to be recognized will come to include at least two images (that is, the preset image set includes at least two preset images). In this case, various methods may be used to select, from the at least two selected images to be recognized, an image as the result image of the preset image set.
In some optional implementations of this embodiment, the execution subject may extract, from the selected images to be recognized, an image as the result image corresponding to the preset image set according to the order of the quality values corresponding to the selected images.
Specifically, as an example, when a larger quality value indicates better quality, the image to be recognized with the largest quality value may be extracted from the selected images as the result image corresponding to the preset image set; alternatively, images to be recognized whose quality values are greater than or equal to a preset threshold may be extracted from the selected images as result images corresponding to the preset image set.
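The two extraction strategies above can be sketched as follows, under the convention that a larger quality value indicates better quality. The scored image names and threshold are illustrative assumptions.

```python
def best_result_image(recognized):
    """Pick the image to be recognized with the largest quality value.
    `recognized` is a list of (image, quality_value) pairs."""
    return max(recognized, key=lambda pair: pair[1])[0]

def result_images_above(recognized, preset_threshold):
    """Keep every image whose quality value is greater than or equal
    to the preset threshold."""
    return [img for img, q in recognized if q >= preset_threshold]

scored = [("img_1", 0.42), ("img_2", 0.91), ("img_3", 0.77)]
```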
According to the method provided by the embodiment of the disclosure, the image quality identification model generated in the embodiment corresponding to fig. 2 can be used for identifying the quality of the image, so that a more accurate identification result can be obtained, and the method is helpful for more accurately and objectively outputting the image with the quality meeting the preset requirement.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating a model, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for generating a model of the present embodiment includes: a sample acquisition unit 501 and a model training unit 502. The sample acquiring unit 501 is configured to acquire a preset training sample set, where training samples in the training sample set include a sample image set and a sample identification result predetermined for the sample image set, the sample image set includes a preset number of sample images, the sample identification result includes a preset number of sample quality values, and a sample quality value in the preset number of sample quality values corresponds to a sample image in the sample image set and is used for representing a degree of goodness of quality of the corresponding sample image compared with other sample images in the sample image set; the model training unit 502 is configured to train the initial model by using a machine learning method, with a sample image set included in training samples in the training sample set as an input of the initial model, with a sample recognition result corresponding to the input sample image set as an expected output of the initial model, and determine the trained initial model as an image quality recognition model.
In this embodiment, the sample acquiring unit 501 of the apparatus 500 for generating a model may acquire a preset training sample set from a remote location or a local location through a wired connection manner or a wireless connection manner. The training samples in the training sample set comprise a sample image set and a sample recognition result predetermined for the sample image set. The sample image set includes a preset number of sample images. The sample identification result includes a preset number of sample quality values. The sample quality value in the preset number of sample quality values corresponds to a sample image in the sample image set and is used for representing the quality degree of the corresponding sample image compared with other sample images in the sample image set.
In this embodiment, based on the training sample set obtained by the sample obtaining unit 501, the model training unit 502 may use a machine learning method to input a sample image set included in the training samples in the training sample set as an initial model, use a sample recognition result corresponding to the input sample image set as an expected output of the initial model, train the initial model, and determine the trained initial model as an image quality recognition model.
In some embodiments, the sample recognition result corresponding to the sample image set in the training sample is determined by: determining a quality order of sample images in the sample image set; assigning a sample quality value to a sample image of the set of sample images based on the determined order of merits; a sample identification result corresponding to the set of sample images is generated that includes the assigned sample quality value.
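The determination of a sample recognition result can be sketched as follows. The ranking input and the value scheme (rank mapped to a value in (0, 1], larger meaning better) are assumptions made for illustration; the disclosure only requires that each sample image be assigned a quality value based on the determined quality order.

```python
def build_sample_recognition_result(sample_images, quality_order):
    """Assign a sample quality value to each sample image from a
    predetermined quality order (indices into sample_images, best
    first), and return the recognition result as a list of values
    aligned with the sample image set."""
    n = len(sample_images)
    quality_values = [0.0] * n
    for rank, idx in enumerate(quality_order):
        # Best-ranked image gets 1.0; each later rank gets a smaller
        # value, characterizing its quality relative to the others.
        quality_values[idx] = (n - rank) / n
    return quality_values

images = ["img_a", "img_b", "img_c"]   # a preset number (3) of sample images
order = [2, 0, 1]                      # img_c best, then img_a, then img_b
result = build_sample_recognition_result(images, order)
```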
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
The apparatus 500 provided by the above embodiment of the present disclosure may determine the sample quality value of a sample image by image comparison when training the image quality recognition model, which is more objective than the prior-art labeling approach that uses no reference object. The embodiment of the present disclosure may therefore generate a more accurate image quality recognition model, which helps to recognize the quality of images more accurately using the generated model and, in turn, to output images whose quality meets the preset requirement more accurately and objectively.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for recognizing an image, which corresponds to the method embodiment shown in fig. 4, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the apparatus 600 for recognizing an image of the present embodiment includes: an image acquisition unit 601 and a first recognition unit 602. Wherein the image acquisition unit 601 is configured to acquire an image to be recognized; the first recognition unit 602 is configured to perform the following recognition steps using the image to be recognized: inputting an image to be recognized into an image quality recognition model generated by adopting the method of any one of the embodiments corresponding to fig. 2, and obtaining a recognition result, wherein the recognition result comprises a quality value used for representing the quality degree of the input image to be recognized.
In this embodiment, the image acquiring unit 601 of the apparatus 600 for recognizing an image may acquire an image to be recognized from a remote or local place by a wired connection manner or a wireless connection manner. Here, the image to be recognized is an image whose quality is to be recognized.
In this embodiment, with the image to be recognized obtained by the image obtaining unit 601, the first recognition unit 602 may perform the following recognition steps: and inputting the image to be recognized into the image quality recognition model to obtain a recognition result. Wherein the identification result comprises a quality value. The quality value is used for representing the quality degree of the image to be recognized of the input image quality recognition model. Specifically, the larger the quality value is, the better the quality of the image to be identified can be represented; alternatively, the smaller the quality value, the better the quality characterizing the image to be recognized.
In this embodiment, the image quality recognition model may be used to characterize the correspondence between the image and the recognition result corresponding to the image. Specifically, the image quality identification model is generated by using the method in the embodiment corresponding to fig. 2, and specific details may refer to the related description in the embodiment corresponding to fig. 2, which is not described herein again.
In some optional implementations of this embodiment, the apparatus 600 may further include: a determination unit (not shown in the drawings) configured to determine whether the magnitude of the quality value in the obtained recognition result satisfies a preset condition; and a sending unit (not shown in the figure) configured to send the image to be recognized to the user terminal of the communication connection and control the user terminal to display the image to be recognized in response to determining that the magnitude of the quality value in the obtained recognition result satisfies a preset condition.
In some optional implementations of this embodiment, the image obtaining unit 601 may include: an acquisition module (not shown in the figures) configured to acquire a preset set of images; and a selecting module (not shown in the figure) configured to select a preset image from the preset image set as the image to be recognized.
In some optional implementations of this embodiment, the identifying step may further include: determining whether the preset image set comprises unselected preset images; and in response to determining that the preset image set does not comprise the unselected preset images, determining a result image corresponding to the preset image set based on the selected image to be identified.
In some optional implementations of this embodiment, the apparatus 600 may further include: a selecting unit (not shown in the figures) configured to, in response to determining that the preset image set includes the unselected preset images, reselect the preset image from the unselected preset images included in the preset image set as an image to be recognized; a second recognition unit (not shown in the figures) configured to continue the recognition step with the most recently selected image to be recognized.
In some optional implementation manners of this embodiment, determining, based on the selected image to be recognized, a result image corresponding to the preset image set includes: and according to the magnitude sequence of the quality values corresponding to the selected images to be recognized, extracting the images to be recognized from the selected images to be recognized as result images corresponding to a preset image set.
It will be understood that the elements described in the apparatus 600 correspond to various steps in the method described with reference to fig. 4. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
The apparatus 600 provided by the above embodiment of the present disclosure may recognize the quality of an image by using the image quality recognition model generated in the embodiment corresponding to FIG. 2, thereby obtaining a more accurate recognition result, which helps to output images whose quality meets the preset requirement more accurately and objectively.
Referring now to FIG. 7, a schematic diagram of an electronic device (e.g., terminal devices 101, 102, 103 or server 105 of FIG. 1) 700 suitable for use in implementing embodiments of the disclosed methods for generating models and embodiments of the methods for recognizing images is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from storage 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a preset training sample set, wherein training samples in the training sample set comprise a sample image set and a sample identification result predetermined for the sample image set, the sample image set comprises a preset number of sample images, the sample identification result comprises a preset number of sample quality values, and the sample quality values in the preset number of sample quality values correspond to the sample images in the sample image set and are used for representing the quality degree of the corresponding sample images compared with other sample images in the sample image set; and training the initial model by using a machine learning method by taking a sample image set included in the training samples in the training sample set as the input of the initial model and taking a sample recognition result corresponding to the input sample image set as the expected output of the initial model, and determining the trained initial model as an image quality recognition model.
Further, the one or more programs, when executed by the electronic device, may further cause the electronic device to: acquiring an image to be identified; using the image to be recognized, performing the following recognition steps: inputting an image to be recognized into an image quality recognition model generated by adopting the method of any one of the embodiments corresponding to fig. 2, and obtaining a recognition result, wherein the recognition result comprises a quality value used for representing the quality degree of the input image to be recognized.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, a sample acquisition unit may also be described as a "unit that acquires a preset training sample set".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (16)

1. A method for generating a model, comprising:
acquiring a preset training sample set, wherein the training sample set comprises at least one training sample, the training samples in the training sample set comprise a sample image set and a sample identification result predetermined for the sample image set, the sample image set comprises a preset number of sample images, the sample identification result comprises a preset number of sample quality values, and the sample quality values in the preset number of sample quality values correspond to the sample images in the sample image set and are used for representing the quality degrees of the corresponding sample images compared with other sample images in the sample image set; wherein the preset number is a predetermined number greater than or equal to 2;
by utilizing a machine learning method, taking a sample image set included in training samples in the training sample set as input of an initial model, taking a sample recognition result corresponding to the input sample image set as expected output of the initial model, training the initial model, and determining the trained initial model as an image quality recognition model;
the method comprises the following steps of: determining a quality order of sample images in the sample image set; assigning a sample quality value to a sample image of the set of sample images based on the determined order of merits; a sample identification result corresponding to the set of sample images is generated that includes the assigned sample quality value.
2. A method for recognizing an image, comprising:
acquiring at least two images to be identified;
using the image to be recognized, performing the following recognition steps: inputting the at least two images to be recognized into the image quality recognition model generated by the method according to claim 1, and obtaining a recognition result, wherein the recognition result comprises at least two quality values corresponding to the at least two images to be recognized, and the quality values are used for representing the quality degree of the input images to be recognized.
3. The method of claim 2, wherein the method further comprises:
determining whether the size of the quality value in the obtained identification result meets a preset condition;
and in response to the fact that the quality value in the obtained identification result meets the preset condition, sending the image to be identified to a user terminal in communication connection, and controlling the user terminal to display the image to be identified.
4. The method of claim 2 or 3, wherein acquiring the image to be identified comprises:
acquiring a preset image set;
and selecting a preset image from the preset image set as an image to be identified.
5. The method of claim 4, wherein the identifying step further comprises:
determining whether the preset image set includes preset images that have not been selected;
and in response to determining that the preset image set includes no unselected preset image, determining a result image corresponding to the preset image set based on the selected images to be recognized.
6. The method of claim 5, wherein the method further comprises:
in response to determining that the preset image set includes unselected preset images, reselecting preset images from the unselected preset images in the preset image set as images to be recognized;
and continuing to perform the recognition step using the most recently selected images to be recognized.
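Claims 4 through 6 together describe a loop: repeatedly select unselected preset images, run the recognition step on them, and stop once the preset image set contains no unselected image. A sketch under stated assumptions (the batch size and the padding of a trailing single image are illustrative choices, not claim requirements beyond the model needing at least two inputs):

```python
def score_preset_set(preset_images, quality_model, batch_size=2):
    """Iterate over a preset image set as in claims 4-6: select
    unselected preset images as images to be recognized, score them
    with the model, and repeat until none remain unselected."""
    remaining = list(preset_images)
    all_scores = {}
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        # The model needs at least two inputs per recognition step, so
        # pad a trailing single image with an already-scored one.
        if len(batch) < 2 and all_scores:
            batch = batch + [next(iter(all_scores))]
        values = quality_model(batch)
        all_scores.update(zip(batch, values))
    return all_scores
```

Once the loop exits, `all_scores` plays the role of the quality values used to determine the result image in claim 7.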
7. The method according to claim 6, wherein the determining, based on the selected images to be recognized, a result image corresponding to the preset image set comprises:
extracting, in order of the magnitudes of the quality values corresponding to the selected images to be recognized, images to be recognized from the selected images to be recognized as result images corresponding to the preset image set.
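The extraction of claim 7 is a sort-and-take by quality value. In the sketch below, `k` (how many result images to extract) is an assumption; the claim only specifies extraction according to the order of the quality values.

```python
def select_result_images(scored, k=1):
    """Claim 7: extract result images from the scored images to be
    recognized in order of their quality values, highest first.

    scored: dict mapping image identifier -> quality value.
    """
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:k]

best = select_result_images({"a.jpg": 0.2, "b.jpg": 0.9, "c.jpg": 0.5}, k=2)
# → ["b.jpg", "c.jpg"]
```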
8. An apparatus for generating a model, comprising:
a sample acquisition unit configured to acquire a preset training sample set, wherein the training sample set comprises at least one training sample, a training sample in the training sample set comprises a sample image set and a sample recognition result predetermined for the sample image set, the sample image set comprises a preset number of sample images, the sample recognition result comprises the preset number of sample quality values, each sample quality value corresponds to a sample image in the sample image set and characterizes the quality of the corresponding sample image relative to the other sample images in the sample image set, and the preset number is a predetermined number greater than or equal to 2;
a model training unit configured to train, by a machine learning method, an initial model by taking the sample image sets included in the training samples in the training sample set as an input of the initial model and taking the sample recognition results corresponding to the input sample image sets as an expected output of the initial model, and to determine the trained initial model as an image quality recognition model;
wherein the sample recognition result is predetermined by: determining a quality order of the sample images in the sample image set; assigning sample quality values to the sample images in the sample image set based on the determined quality order; and generating a sample recognition result corresponding to the sample image set, the sample recognition result including the assigned sample quality values.
9. An apparatus for recognizing an image, comprising:
an image acquisition unit configured to acquire at least two images to be recognized;
a first recognition unit configured to perform the following recognition step using the images to be recognized: inputting the at least two images to be recognized into an image quality recognition model generated by the method according to claim 1 to obtain a recognition result, wherein the recognition result comprises at least two quality values corresponding to the at least two images to be recognized, and each quality value characterizes the quality of the corresponding input image to be recognized.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a determination unit configured to determine whether the magnitude of the quality values in the obtained recognition result satisfies a preset condition;
a sending unit configured to, in response to determining that the magnitude of the quality values in the obtained recognition result satisfies the preset condition, send the image to be recognized to a communicatively connected user terminal and control the user terminal to display the image to be recognized.
11. The apparatus according to claim 9 or 10, wherein the image acquisition unit comprises:
an acquisition module configured to acquire a preset image set;
and a selecting module configured to select preset images from the preset image set as images to be recognized.
12. The apparatus of claim 11, wherein the identifying step further comprises:
determining whether the preset image set includes preset images that have not been selected;
and in response to determining that the preset image set includes no unselected preset image, determining a result image corresponding to the preset image set based on the selected images to be recognized.
13. The apparatus of claim 12, wherein the apparatus further comprises:
a selecting unit configured to, in response to determining that the preset image set includes unselected preset images, reselect preset images from the unselected preset images in the preset image set as images to be recognized;
and a second recognition unit configured to continue to perform the recognition step using the most recently selected images to be recognized.
14. The apparatus according to claim 13, wherein the determining, based on the selected images to be recognized, a result image corresponding to the preset image set comprises:
extracting, in order of the magnitudes of the quality values corresponding to the selected images to be recognized, images to be recognized from the selected images to be recognized as result images corresponding to the preset image set.
15. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
16. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
CN201910371718.4A 2019-05-06 2019-05-06 Method and device for generating model and method and device for recognizing image Active CN110335237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910371718.4A CN110335237B (en) 2019-05-06 2019-05-06 Method and device for generating model and method and device for recognizing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910371718.4A CN110335237B (en) 2019-05-06 2019-05-06 Method and device for generating model and method and device for recognizing image

Publications (2)

Publication Number Publication Date
CN110335237A CN110335237A (en) 2019-10-15
CN110335237B true CN110335237B (en) 2022-08-09

Family

ID=68139353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910371718.4A Active CN110335237B (en) 2019-05-06 2019-05-06 Method and device for generating model and method and device for recognizing image

Country Status (1)

Country Link
CN (1) CN110335237B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132239B (en) * 2020-11-24 2021-03-16 北京远鉴信息技术有限公司 Training method, device, equipment and storage medium
CN112784778B (en) * 2021-01-28 2024-04-09 北京百度网讯科技有限公司 Method, apparatus, device and medium for generating model and identifying age and sex

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631457A (en) * 2015-12-17 2016-06-01 小米科技有限责任公司 Method and device for selecting picture
CN109118470A (en) * 2018-06-26 2019-01-01 腾讯科技(深圳)有限公司 A kind of image quality evaluating method, device, terminal and server
CN109360197A (en) * 2018-09-30 2019-02-19 北京达佳互联信息技术有限公司 Processing method, device, electronic equipment and the storage medium of image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318562B (en) * 2014-10-22 2018-02-23 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being used to determine the quality of the Internet images
CN105138962A (en) * 2015-07-28 2015-12-09 小米科技有限责任公司 Image display method and image display device
CN105635727B (en) * 2015-12-29 2017-06-16 北京大学 Evaluation method and device based on the image subjective quality for comparing in pairs
CN106296669B (en) * 2016-08-01 2019-11-19 微梦创科网络科技(中国)有限公司 A kind of image quality evaluating method and device
JP2018186439A (en) * 2017-04-27 2018-11-22 キヤノン株式会社 Information processing system, information processing apparatus, and information processing method
CN107273510B (en) * 2017-06-20 2020-06-16 Oppo广东移动通信有限公司 Photo recommendation method and related product
CN108288027B (en) * 2017-12-28 2020-10-27 新智数字科技有限公司 Image quality detection method, device and equipment
CN108109145A (en) * 2018-01-02 2018-06-01 中兴通讯股份有限公司 Picture quality detection method, device, storage medium and electronic device
CN109035246B (en) * 2018-08-22 2020-08-04 浙江大华技术股份有限公司 Face image selection method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631457A (en) * 2015-12-17 2016-06-01 小米科技有限责任公司 Method and device for selecting picture
CN109118470A (en) * 2018-06-26 2019-01-01 腾讯科技(深圳)有限公司 A kind of image quality evaluating method, device, terminal and server
CN109360197A (en) * 2018-09-30 2019-02-19 北京达佳互联信息技术有限公司 Processing method, device, electronic equipment and the storage medium of image

Also Published As

Publication number Publication date
CN110335237A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN109858445B (en) Method and apparatus for generating a model
CN109816589B (en) Method and apparatus for generating cartoon style conversion model
CN110288049B (en) Method and apparatus for generating image recognition model
CN108805091B (en) Method and apparatus for generating a model
CN109993150B (en) Method and device for identifying age
CN109740018B (en) Method and device for generating video label model
CN110009059B (en) Method and apparatus for generating a model
CN109919244B (en) Method and apparatus for generating a scene recognition model
CN110084317B (en) Method and device for recognizing images
CN109981787B (en) Method and device for displaying information
CN109829432B (en) Method and apparatus for generating information
CN107609506B (en) Method and apparatus for generating image
CN110211121B (en) Method and device for pushing model
CN110046571B (en) Method and device for identifying age
CN109977905B (en) Method and apparatus for processing fundus images
CN109862100B (en) Method and device for pushing information
CN109816023B (en) Method and device for generating picture label model
CN112149699A (en) Method and device for generating model and method and device for recognizing image
CN113395538B (en) Sound effect rendering method and device, computer readable medium and electronic equipment
CN110008926B (en) Method and device for identifying age
CN108268936B (en) Method and apparatus for storing convolutional neural networks
CN110335237B (en) Method and device for generating model and method and device for recognizing image
CN110727775B (en) Method and apparatus for processing information
CN109816670B (en) Method and apparatus for generating image segmentation model
CN117633228A (en) Model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant