
CN110110861A - Method and apparatus for determining model hyper-parameters and model training, and storage medium - Google Patents

Method and apparatus for determining model hyper-parameters and model training, and storage medium

Info

Publication number
CN110110861A
CN110110861A (application CN201910384551.5A); granted as CN110110861B
Authority
CN
China
Prior art keywords
hyper-parameter
machine learning
parameters
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910384551.5A
Other languages
Chinese (zh)
Other versions
CN110110861B (en)
Inventor
林宸
李楚鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201910384551.5A priority Critical patent/CN110110861B/en
Publication of CN110110861A publication Critical patent/CN110110861A/en
Application granted granted Critical
Publication of CN110110861B publication Critical patent/CN110110861B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure provide a technique for determining model hyper-parameters and an image processing technique, which help improve the image processing performance of a machine learning model. The method of determining a model hyper-parameter comprises: determining an initial value of the hyper-parameter; according to the initial value of the hyper-parameter and a sample image set, performing M1 iterations of training on an initial machine learning model through each of a plurality of parallel paths to obtain a first updated machine learning model for each path; updating the value of the hyper-parameter to a first updated value based on the performance parameters of the first updated machine learning models of the plurality of paths; and, based on the first updated value of the hyper-parameter and the sample image set, performing M2 iterations of training on the first updated machine learning models of the plurality of paths and further updating the value of the hyper-parameter, until a preset cutoff condition is reached, to obtain the final value of the hyper-parameter.

Description

Method and device for determining model hyper-parameters, model training method, and storage medium
Technical Field
The present disclosure relates to machine learning technologies, and in particular to methods, apparatus, and storage media for determining model hyper-parameters and for model training.
Background
In recent years, machine learning models such as deep neural networks have achieved remarkable success in various computer vision applications, and driven by large amounts of labeled data, network performance has reached a surprising level. However, the hyper-parameters of machine learning models are currently designed mainly by hand: after the hyper-parameters of a model are designed manually, they are kept unchanged while the machine learning model is trained, finally yielding the model parameters of the machine learning model.
Disclosure of Invention
In view of the above, the present disclosure provides at least one technique for determining a hyper-parameter of a model and a model training technique.
In a first aspect, a method of determining a model hyper-parameter is provided, the method comprising: determining an initial value of the hyper-parameter; performing M1 iterations of training on an initial machine learning model through each path of a plurality of parallel paths according to the initial value of the hyper-parameter and a sample image set to obtain a first updated machine learning model for each path, wherein the training parameters of different paths among the plurality of paths have different values obtained by sampling based on the hyper-parameter, and M1 is greater than or equal to 1 and less than or equal to a first value; updating the value of the hyper-parameter to a first updated value based on the performance parameters of the first updated machine learning model of each of the plurality of paths; and performing M2 iterations of training and further value updates of the hyper-parameter on the first updated machine learning models of the plurality of paths based on the first updated value of the hyper-parameter and the sample image set, until a preset cutoff condition is reached, to obtain a final value of the hyper-parameter, wherein M2 is greater than or equal to 1 and less than or equal to the first value.
In one possible implementation, before the performing of the M2 iterations of training and the further value updates of the hyper-parameter on the first updated machine learning models of the plurality of paths, the method further includes: selecting a first target updated machine learning model from the first updated machine learning models of the plurality of paths; and updating the model parameters of the first updated machine learning models of the plurality of paths to the model parameters of the first target updated machine learning model.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the selecting of a first target updated machine learning model from the first updated machine learning models of the multiple paths includes: selecting the first target updated machine learning model from the first updated machine learning models of the paths based on the performance parameters of the first updated machine learning models of the paths.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the performing of M1 iterations of training on an initial machine learning model through each of a plurality of parallel paths according to the initial value of the hyper-parameter and the sample image set to obtain a first updated machine learning model for each path includes: performing a first iteration of training on the initial machine learning model through each of the plurality of paths based on the initial value of the hyper-parameter and at least one first sample image in the sample image set to obtain a first inner-loop updated machine learning model for each path; performing a second iteration of training on the first inner-loop updated machine learning model of each path through each of the plurality of paths based on the initial value of the hyper-parameter and at least one second sample image in the sample image set to obtain a second inner-loop updated machine learning model for each path; and obtaining the first updated machine learning model of each path based on the second inner-loop updated machine learning model of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the performing, through each of a plurality of paths, of a first iteration of training on the initial machine learning model based on the initial value of the hyper-parameter and at least one first sample image in the sample image set to obtain a first inner-loop updated machine learning model for each path includes: sampling multiple times based on the initial value of the hyper-parameter to obtain a first training parameter for each of the multiple paths; and performing the first iteration of training on the initial machine learning model based on the first training parameter of each path and at least one first sample image in the sample image set to obtain the first inner-loop updated machine learning model of each path.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation manner, the training parameters used in the first iterative training and the second iterative training of each path are obtained by performing different sampling based on the initial values of the hyper-parameters.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the updating, based on the performance parameter of the first updated machine learning model of each of the plurality of paths, of the value of the hyper-parameter to a first updated value includes: determining a model update parameter for each path of the plurality of paths based on the performance parameter of the first updated machine learning model of that path; averaging the model update parameters of the paths to obtain an average update parameter; and updating the value of the hyper-parameter to the first updated value according to the average update parameter.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, before the updating of the value of the hyper-parameter to the first updated value based on the performance parameter of the first updated machine learning model of each of the plurality of paths, the method further includes: normalizing the performance parameters of the first updated machine learning model of each of the plurality of paths; and the updating of the value of the hyper-parameter to a first updated value based on those performance parameters comprises: updating the value of the hyper-parameter to the first updated value based on the performance parameter of the first updated machine learning model of each of the plurality of paths obtained after the normalization processing.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the performance parameter includes accuracy.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the performing of M2 iterations of training and further value updates of the hyper-parameter on the first updated machine learning models of the plurality of paths based on the first updated value of the hyper-parameter and the sample image set includes: performing M2 iterations of training on the first updated machine learning model of each path in the plurality of paths based on the first updated value of the hyper-parameter and the sample image set to obtain a second updated machine learning model for each path; and updating the value of the hyper-parameter to a second updated value based on the performance parameters of the second updated machine learning model of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the hyper-parameter includes an enhancement distribution parameter for performing image enhancement processing on the sample image set; and performing M1 iterations of training on an initial machine learning model through each of a plurality of parallel paths according to the initial value of the hyper-parameter and the sample image set includes: determining an enhancement probability distribution according to the enhancement distribution parameter, wherein the enhancement probability distribution includes the probabilities of a plurality of image enhancement operations; sampling, based on the enhancement probability distribution, a target data enhancement operation for each of the parallel paths from the plurality of data enhancement operations, and performing image enhancement processing on at least one sample image of each path using the target data enhancement operation to obtain at least one enhanced image; and performing the M1 iterations of training of the initial machine learning model based on the at least one enhanced image of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, obtaining the performance parameter of the first updated machine learning model includes: processing at least one test image in a test image set through the first updated machine learning model of each path in the plurality of paths to obtain an image processing result; and obtaining the performance parameter of the first updated machine learning model of each path based on the image processing result corresponding to that path.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the preset cutoff condition includes at least one of: the number of updates to the hyper-parameter reaches a preset number of updates; or the performance of the updated machine learning models obtained by the plurality of paths reaches a target performance.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the method further includes: selecting a target machine learning model from the finally updated machine learning models of the paths obtained when the preset cutoff condition is reached, wherein the target machine learning model is a trained machine learning model for image processing.
In a second aspect, there is provided an apparatus for determining a model hyper-parameter, the apparatus comprising: the initialization module is used for determining the initial value of the hyper-parameter; a model training module, configured to perform M1 iterative training on an initial machine learning model through each path of multiple paths in parallel according to the initial value of the hyper-parameter and the sample image set, to obtain a first updated machine learning model of each path, where training parameters of different paths in the multiple paths have different values obtained by sampling based on the hyper-parameter, and M1 is greater than or equal to 1 and less than or equal to the first value; a hyper-parameter updating module for updating a value of the hyper-parameter to a first updated value based on the performance parameter of the first updated machine learning model for each of the plurality of paths; and the hyper-parameter acquisition module is used for performing M2 times of iterative training and further value updating of the hyper-parameters on the first updated machine learning models of the paths based on the first updated values of the hyper-parameters and the sample image set until a preset cut-off condition is reached to obtain final values of the hyper-parameters, wherein M2 is greater than or equal to 1 and less than or equal to the first values.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter acquisition module is further configured to: before the performing of the M2 iterations of training and the further value updates of the hyper-parameter on the first updated machine learning models of the plurality of paths, select a first target updated machine learning model from the first updated machine learning models of the plurality of paths; and update the model parameters of the first updated machine learning models of the plurality of paths to the model parameters of the first target updated machine learning model.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter acquisition module, when configured to select a first target updated machine learning model from the first updated machine learning models of the multiple paths, is configured to: select the first target updated machine learning model from the first updated machine learning models of the paths based on the performance parameters of the first updated machine learning models of the paths.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the model training module is specifically configured to: perform a first iteration of training on the initial machine learning model through each of a plurality of paths based on the initial value of the hyper-parameter and at least one first sample image in the sample image set to obtain a first inner-loop updated machine learning model for each path; perform a second iteration of training on the first inner-loop updated machine learning model of each path through each of the plurality of paths based on the initial value of the hyper-parameter and at least one second sample image in the sample image set to obtain a second inner-loop updated machine learning model for each path; and obtain the first updated machine learning model of each path based on the second inner-loop updated machine learning model of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the model training module, when configured to obtain the first inner-loop updated machine learning model of each path, is configured to: sample multiple times based on the initial value of the hyper-parameter to obtain a first training parameter for each of the multiple paths; and perform the first iteration of training on the initial machine learning model based on the first training parameter of each path and at least one first sample image in the sample image set to obtain the first inner-loop updated machine learning model of each path.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation manner, the training parameters used in the first iterative training and the second iterative training of each path are obtained by performing different sampling based on the initial values of the hyper-parameters.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter updating module is specifically configured to: determine a model update parameter for each path of the plurality of paths based on the performance parameter of the first updated machine learning model of that path; average the model update parameters of the paths to obtain an average update parameter; and update the value of the hyper-parameter to the first updated value according to the average update parameter.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter updating module is specifically configured to: normalize the performance parameters of the first updated machine learning model of each of the plurality of paths; and update the value of the hyper-parameter to the first updated value based on the performance parameter of the first updated machine learning model of each of the plurality of paths obtained after the normalization processing.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the performance parameter includes accuracy.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter acquisition module is specifically configured to: perform M2 iterations of training on the first updated machine learning model of each path in the plurality of paths based on the first updated value of the hyper-parameter and the sample image set to obtain a second updated machine learning model for each path; and update the value of the hyper-parameter to a second updated value based on the performance parameters of the second updated machine learning model of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the hyper-parameter includes an enhancement distribution parameter for performing image enhancement processing on the sample image set; and the model training module is specifically configured to: determine an enhancement probability distribution according to the enhancement distribution parameter, wherein the enhancement probability distribution includes the probabilities of a plurality of image enhancement operations; sample, based on the enhancement probability distribution, a target data enhancement operation for each of the parallel paths from the plurality of data enhancement operations, and perform image enhancement processing on at least one sample image of each path using the target data enhancement operation to obtain at least one enhanced image; and perform the M1 iterations of training of the initial machine learning model based on the at least one enhanced image of each of the plurality of paths.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter updating module is further configured to obtain the performance parameters of the first updated machine learning models by: processing at least one test image in a test image set through the first updated machine learning model of each path in the plurality of paths to obtain an image processing result; and obtaining the performance parameter of the first updated machine learning model of each path based on the image processing result corresponding to that path.
In combination with any one of the embodiments provided by the present disclosure, in one possible implementation, the preset cutoff condition includes at least one of: the number of updates to the hyper-parameter reaches a preset number of updates; or the performance of the updated machine learning models obtained by the plurality of paths reaches a target performance.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter acquisition module is further configured to select a target machine learning model from the finally updated machine learning models of the multiple paths obtained when the preset cutoff condition is reached, wherein the target machine learning model is a trained machine learning model used for image processing.
In combination with any one of the embodiments provided by the present disclosure, in a possible implementation, the hyper-parameter acquisition module is further configured to, after obtaining the final value of the hyper-parameter, train an initial machine learning model with initialized model parameters based on the final value of the hyper-parameter to obtain a trained target machine learning model.
In a third aspect, a method for training a machine learning model is further provided, including: obtaining the final value of the hyper-parameter by the method for determining model hyper-parameters according to any of the above embodiments, and training an initial machine learning model having initial model parameters based on the final value of the hyper-parameter to obtain a target machine learning model.
The embodiments of the present disclosure also provide an apparatus for implementing the training method.
In a fourth aspect, an image processing method is provided, the method comprising: acquiring an image to be processed; and processing the image to be processed using a machine learning model to obtain an image processing result, wherein the machine learning model is trained using hyper-parameters determined by the method for determining model hyper-parameters according to any embodiment of the present disclosure.
In a fifth aspect, an electronic device is provided, which includes a memory for storing computer instructions executable on a processor, and the processor is configured to implement the method for determining hyper-parameters of a model or the training method of a machine learning model according to any embodiment of the present disclosure when executing the computer instructions.
In a sixth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method for determining hyper-parameters of a model or the training method of a machine learning model according to any of the embodiments of the present disclosure.
In a seventh aspect, a training system is provided, including: the system comprises a parameter management server and a controller, wherein the parameter management server is used for managing and updating the numerical value of the hyper-parameter. The controller is used for circularly or iteratively updating the machine learning model based on the hyper-parameters and feeding back the performance parameters serving as the basis for updating the hyper-parameters to the parameter management server so that the parameter management server updates the hyper-parameters accordingly.
According to the technique for determining model hyper-parameters provided by the embodiments of the present disclosure, the machine learning model is iteratively trained over a plurality of paths, the hyper-parameter is updated after the model has undergone M1 iterations of training, and the model then continues to be iteratively trained, with further updates to the hyper-parameter value, based on the updated hyper-parameter. Determining the hyper-parameter of the machine learning model with a value update/performance check cycle as the basic unit accelerates hyper-parameter optimization; and because the hyper-parameter is updated based on the performance parameters of the machine learning model, the hyper-parameter search can use those performance parameters as its optimization direction, improving the performance of the machine learning model trained with the determined hyper-parameter.
Drawings
In order to more clearly illustrate one or more embodiments of the present disclosure or technical solutions in related arts, the drawings used in the description of the embodiments or related arts will be briefly described below, it is obvious that the drawings in the description below are only some embodiments described in one or more embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive exercise.
FIG. 1 illustrates a method of determining a model hyper-parameter provided by at least one embodiment of the present disclosure;
FIG. 2 illustrates another method of determining a model hyper-parameter provided by at least one embodiment of the present disclosure;
FIG. 3 illustrates an application scenario example of a method for determining a model hyper-parameter provided by at least one embodiment of the present disclosure;
FIG. 4 illustrates yet another method of determining a model hyper-parameter provided by at least one embodiment of the present disclosure;
FIG. 5 illustrates a model training process provided by at least one embodiment of the present disclosure;
FIG. 6 illustrates an apparatus for determining a hyper-parameter of a model provided by at least one embodiment of the present disclosure;
FIG. 7 illustrates a flow of an image processing method provided by at least one embodiment of the present disclosure;
FIG. 8 illustrates a method of training a machine learning model provided by at least one embodiment of the present disclosure;
FIG. 9 illustrates a training apparatus for a machine learning model according to at least one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in one or more embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art based on one or more embodiments of the disclosure without inventive faculty are intended to be within the scope of the disclosure.
The method for determining the hyper-parameters of the model and the model training method provided by the embodiment of the disclosure can be applied to a training platform of a machine learning model, such as a cloud training platform or an end training platform, wherein the training platform can include one or more devices, and accordingly, the method can be executed by a cloud device, a network device or a terminal device, and the like, and the embodiment of the disclosure does not limit the method. For ease of understanding, the method is described below by way of example as being a training apparatus.
In the embodiment of the disclosure, based on a reasonably designed search space and an efficient search mode, reasonable hyper-parameters are automatically searched, and the performance of a machine learning model based on the hyper-parameters can be improved.
In some embodiments, the process of determining the hyper-parameter includes multiple cycles, where in each cycle, the training device samples based on a current value of the hyper-parameter to obtain a training parameter used by each of the multiple paths, adjusts a model parameter of the initial machine learning model of the current cycle based on the training parameter to obtain an updated machine learning model, and then updates the value of the hyper-parameter based on a performance parameter of the updated machine learning model in the multiple paths, where the updated hyper-parameter is used for determining the training parameters of the multiple paths in a next cycle.
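As a rough illustration of one such cycle, consider the following Python sketch (all function and variable names here are hypothetical; the disclosure does not prescribe any particular implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_params(theta):
    # Hypothetical sampling step: draw one categorical choice from the
    # distribution induced by theta (here via a softmax).
    p = np.exp(theta - theta.max())
    p /= p.sum()
    return rng.choice(len(theta), p=p)

def run_one_cycle(theta, paths, train_path, evaluate, update_theta, m1):
    """One cycle: sample per-path training parameters from theta, train each
    path for m1 iterations, then update theta from the per-path performance
    parameters. train_path, evaluate and update_theta are placeholders."""
    rewards = []
    for i, model in enumerate(paths):
        params = sample_training_params(theta)   # differs from path to path
        paths[i] = train_path(model, params, m1)
        rewards.append(evaluate(paths[i]))       # e.g. validation accuracy
    return update_theta(theta, rewards), paths
```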
In the embodiment of the present disclosure, the machine learning model may be a neural network or other model trained based on hyper-parameters, which is not limited by the embodiment of the present disclosure.
In some embodiments, the hyper-parameter may be a parameter for obtaining a loss function, or include a parameter for data enhancement, etc., which is not limited by the embodiments of the present disclosure.
The embodiments of the present disclosure provide a method for determining a model hyper-parameter, which aims to quickly find a hyper-parameter with better performance by running the hyper-parameter search simultaneously with the training process of the model. Referring to fig. 1, fig. 1 illustrates a method for determining a model hyper-parameter according to at least one embodiment of the present disclosure.
In step 100, an initial value of a hyper-parameter is determined.
Optionally, steps 100 to 104 may correspond to the first loop flow in all loops, and accordingly, in step 100, a hyper-parameter may be initialized, for example, an initial value is given to the hyper-parameter. Or, the steps 100 to 104 may correspond to a certain intermediate cycle flow in all cycles, and accordingly, the value of the hyper-parameter obtained in the previous cycle is used as the initial value of the hyper-parameter of the current cycle, which is not limited in the embodiment of the present disclosure.
In step 102, according to the initial value of the hyper-parameter and the sample image set, performing M1 times of iterative training on the initial machine learning model through each path in a plurality of parallel paths to obtain a first updated machine learning model of each path.
Alternatively, the machine learning model may be iteratively trained in parallel through multiple paths, for example, the same machine learning model is distributed to multiple processing units or threads, resulting in multiple machine learning models in the multiple processing units or threads, and the processing object of each machine learning model may be at least one sample image in the sample image set.
The training device may sample based on the hyper-parameters to obtain training parameters for the model, where the sampling may be performed multiple times to obtain training parameters for each of the multiple paths. The training parameters for different ones of the plurality of paths may be differently sampled based on the hyper-parameter, and accordingly, different paths may have different values of the training parameters.
After the training parameters are obtained, the machine learning model is iteratively trained by combining the training parameters and at least one sample image.
For convenience of understanding, the model before the iterative training in the current loop may be referred to as an initial machine learning model, and the model obtained after the iterative training may be referred to as a first updated machine learning model. The number of iterative training of the model may be M1 times, and M1 may be greater than or equal to 1 and less than or equal to the first value. Specifically, the number of iterative training may be one or more times, for example, the value of M1 is 50 or 30 or other values, which may be preset or determined based on the current performance of the machine learning model, or determined by other means, and accordingly, the number of iterative training in different loops may be the same or different, which is not limited by the embodiment of the present disclosure.
In addition, in a cycle, multiple paths may perform iterative training on the machine learning model for the same number of times, or different paths may also perform iterative training on the machine learning model for different numbers of times, which is not limited in this disclosure.
In some embodiments, assuming that M1 is an integer greater than 1, in each path, the processing unit or at least one thread may perform one iteration on the initial machine learning model based on the training parameters to obtain a network parameter adjusted machine learning model, and use the network parameter adjusted machine learning model as an input of a next iteration, wherein in the next iteration, another training parameter may be obtained by sampling based on the hyper-parameter, and thus M1 iterations are performed repeatedly to obtain a first updated machine learning model of the path.
In addition, since the multiple paths are parallel, the number of paths is not limited by the embodiments of the present disclosure. Empirical experiments show that different numbers of paths may affect the performance of model training: within a certain range, increasing the number of paths can improve model performance, while beyond a certain value the additional paths have little further effect on training performance. The number of paths may therefore be chosen by weighing model performance against computational resource consumption.
In step 104, the value of the hyper-parameter is updated to a first updated value based on the performance parameters of the first updated machine learning model for each of the plurality of paths.
The updating of the hyper-parameters may be based on performance parameters of a first updated machine learning model in the plurality of paths, and in some embodiments, the updating of the hyper-parameters is performed based on a reinforcement learning manner, for example, the purpose of updating the hyper-parameters is to enable a model with better performance parameters to be trained based on the hyper-parameters.
The training device may refer to the updated value of the hyper-parameter as the first updated value. Thus, one cycle of the hyper-parameter update is completed.
In step 106, based on the first updated value of the hyper-parameter and the sample image set, performing M2 iterative training and further value updating of the hyper-parameter on the first updated machine learning models of the plurality of paths until a preset cutoff condition is reached, and obtaining a final value of the hyper-parameter.
Optionally, after updating the hyper-parameters at step 104, the iterative updating of the machine learning model in the plurality of paths may continue based on the first updated value of the hyper-parameters and the sample image set. For example, the number of iterative training for the first updated machine learning model may be M2, the M2 may be greater than or equal to 1 and less than or equal to the first value, and M2 and M1 may be the same or different.
As the machine learning model continues to be iteratively trained, the value of the hyper-parameter is further updated; for example, the value of the hyper-parameter may continue to be updated based on the performance parameters of the machine learning model after further iterative training. When a preset cutoff condition is reached, a final value of the hyper-parameter is obtained; this is the preferred hyper-parameter value found by the search. For example, the preset cutoff condition may be that the number of updates to the hyper-parameter reaches a preset number of updates, or that the performance of the updated machine learning models obtained by the plurality of paths reaches a target performance.
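A minimal check of such a cutoff condition might look as follows (both thresholds are illustrative assumptions; the disclosure leaves their values unspecified):

```python
def reached_cutoff(num_updates, best_performance,
                   max_updates=300, target_performance=None):
    """Stop when the hyper-parameter has been updated a preset number of
    times, or when the best path performance reaches the target."""
    if num_updates >= max_updates:
        return True
    return (target_performance is not None
            and best_performance >= target_performance)
```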
In some embodiments, the method for determining the model hyper-parameter iteratively trains the machine learning model over the multiple paths, updates the hyper-parameter after the model has undergone M1 iterations of training, and then continues to iteratively train the model through the multiple paths, with further updates to the hyper-parameter value, based on the updated hyper-parameter. Determining the hyper-parameter of the machine learning model with a value update/performance check cycle as the basic unit accelerates hyper-parameter optimization; and because the hyper-parameter is updated based on the performance parameters of the machine learning model, the hyper-parameter search can take those performance parameters as its optimization direction, improving the performance of a machine learning model trained with the determined hyper-parameter.
Fig. 2 illustrates another method for determining model hyper-parameters according to at least one embodiment of the present disclosure, which describes a more detailed process for searching model hyper-parameters. As shown in fig. 2, the method may include:
in step 100, an initial value of a hyper-parameter is determined.
In step 102, according to the initial value of the hyper-parameter and the sample image set, performing M1 times of iterative training on the initial machine learning model through each path in a plurality of parallel paths to obtain a first updated machine learning model of each path.
Optionally, performing M1 iterations of training on the initial machine learning model of each path to obtain a first updated machine learning model may include the following process:
for example, a model obtained after performing a first iterative training on the initial machine learning model based on the initial value of the hyper-parameter and at least one first sample image in the sample image set may be referred to as a first inner loop updated machine learning model. In the process of obtaining the first inner-loop updated machine learning model, multiple sampling may be performed based on an initial value of a hyper-parameter to obtain a first training parameter of each of a plurality of paths, and then, first iterative training is performed on the initial machine learning model based on the first training parameter and a sample image set to obtain the first inner-loop updated machine learning model of each path.
Then, a second iterative training may be performed on the first inner-loop updated machine learning model of each path through each path of the plurality of paths based on the initial value of the hyper-parameter and at least one second sample image in the sample image set, and the obtained updated model may be referred to as a second inner-loop updated machine learning model.
The machine learning model is then iteratively updated based on the second inner-loop updated machine learning model of each of the plurality of paths, and so on, until the first updated machine learning model of each path is obtained. The training parameters used in the first and second iterations of training of each path are obtained by different sampling based on the initial value of the hyper-parameter.
In step 104, the value of the hyper-parameter is updated to a first updated value based on the performance parameters of the first updated machine learning model for each of the plurality of paths.
In some embodiments, updating the hyper-parameter based on the model performance parameters may include: determining a model update parameter for each of the plurality of paths based on the performance parameter of the first updated machine learning model of that path; averaging the model update parameters of the paths to obtain an average update parameter; and updating the value of the hyper-parameter to the first updated value according to the average update parameter.
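Interpreted as a gradient-style update (an assumption that is consistent with the reinforcement-learning description later in this text), the averaging step might be sketched as:

```python
import numpy as np

def average_and_update(theta, per_path_updates, lr=0.05):
    """per_path_updates holds one update vector per path, each derived from
    that path's performance parameter; the vectors are averaged and the
    average update parameter is applied to the hyper-parameter theta."""
    avg_update = np.mean(np.asarray(per_path_updates), axis=0)
    return theta + lr * avg_update  # lr is an assumed step size
```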
To obtain the performance parameters of the models, at least one test image in a test image set may be processed through the first updated machine learning model of each of the paths to obtain an image processing result, and the performance parameter of the first updated machine learning model of each path is then obtained based on the image processing results of the plurality of paths. For example, the performance parameter may be the accuracy of the model.
Further, before the value of the hyper-parameter is updated to the first updated value, the performance parameters of the first updated machine learning models of the plurality of paths may be normalized, and the value of the hyper-parameter is then updated to the first updated value based on the normalized performance parameters.
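The normalization is not specified further; one common choice (an assumption here) is to standardize the per-path performance parameters so that the hyper-parameter update depends on relative rather than absolute performance:

```python
import numpy as np

def normalize_rewards(accuracies, eps=1e-8):
    """Standardize the per-path performance parameters (e.g. accuracies)
    to zero mean and unit variance before the hyper-parameter update."""
    acc = np.asarray(accuracies, dtype=np.float64)
    return (acc - acc.mean()) / (acc.std() + eps)
```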
In step S106, a first target updated machine learning model is selected from the first updated machine learning models of the plurality of paths, and the model parameters of the first updated machine learning model of the plurality of paths are updated to the model parameters of the first target updated machine learning model.
Optionally, the model parameters of the machine learning models of the multiple paths may be unified before the next round of iterative training of the model. For example, one model may be selected from the plurality of first updated machine learning models as the first target updated machine learning model based on the performance parameters of the first updated machine learning models of the plurality of paths, and the model parameters of the models of all paths may then be updated to the model parameters of the first target updated machine learning model. Illustratively, the model performance parameters include, but are not limited to, the accuracy of the model on the validation set.
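With PyTorch-style models (an illustrative assumption; the disclosure does not mandate any framework), unifying the path models can amount to copying the best path's parameters into every path:

```python
import copy

def broadcast_best_model(models, accuracies):
    """Select the path model with the best performance parameter and copy
    its model parameters into every path model."""
    best = max(range(len(models)), key=lambda i: accuracies[i])
    best_state = copy.deepcopy(models[best].state_dict())
    for m in models:
        m.load_state_dict(best_state)  # load_state_dict copies the tensors
    return best
```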
In step 106, based on the first updated value of the hyper-parameter and the sample image set, performing M2 iterative training and further value updating of the hyper-parameter on the first updated machine learning models of the plurality of paths until a preset cutoff condition is reached, and obtaining a final value of the hyper-parameter.
For example, in further iterative training of the first updated machine learning model, the first updated machine learning model for each of the plurality of paths may be iteratively trained M2 times based on the first updated value of the hyper-parameter and the sample image set, resulting in the second updated machine learning model for each of the paths. Similarly, the value of the hyper-parameter is updated to a second updated value based on the performance parameter of the second updated machine learning model for each of the plurality of paths.
As described above, the updating of the hyper-parameter and the updating of the model parameters proceed simultaneously: during the training of the model, the performance parameters of the model training are fed back in stages, and the hyper-parameter is updated based on those performance parameters.
Furthermore, after obtaining the optimized hyper-parameters, there may be two cases:
for example, a trained model may be obtained simultaneously. And selecting a target machine learning model from the finally updated machine learning models of the paths obtained under the condition that the preset cutoff condition is reached, wherein the target machine learning model is a trained machine learning model for image processing.
As another example, the model may be retrained with the final optimized hyper-parameter to obtain the final trained machine learning model: based on the final value of the hyper-parameter, an initial machine learning model with initialized model parameters is trained to obtain the trained target machine learning model.
In the method for determining model hyper-parameters of some embodiments, the hyper-parameter is updated simultaneously during the training of the machine learning model, which accelerates the hyper-parameter search; in addition, because the method uses the performance parameters of the machine learning model as the optimization basis when updating the hyper-parameter, the effect of the hyper-parameter is ensured, enabling a fast and effective search for superior hyper-parameters.
The method for determining model hyper-parameters is suitable for the search optimization of the hyper-parameters of any machine learning model. The implementation of the method is described below taking one kind of hyper-parameter, an enhancement distribution parameter for performing image enhancement processing on the sample image set, as an example; it should be understood, however, that the method is not limited to the optimization of enhancement distribution parameters.
A data enhancement strategy can be applied to the training process of a network: using a data enhancement strategy to enhance the network's input data can alleviate the overfitting problem during network training. For example, data enhancement such as rotation, shearing, and translation can be applied to input image data to obtain new training data for training the network, which can improve the generalization ability of the network and thus the accuracy of its predictions.
Different data enhancement strategies also lead to different training results. For example, with one data enhancement strategy the trained network generalizes somewhat poorly and its accuracy on the validation set is low, while another data enhancement strategy improves the accuracy of the trained network. Searching for a better data enhancement strategy therefore plays an important role in training a network with better performance.
The embodiments of the present disclosure provide an optimization method for a data enhancement policy, i.e., a method for automatically searching for a better data enhancement policy, described in detail as follows:
First, for clarity of the description of the method, some basic concepts are explained:
Search space and data enhancement operations: during network training, the data enhancement operations performed on the network's input data may be selected from a set of preset data enhancement operations; this set of preset operations may be referred to as the "search space".
For example, when the input data of a network is an image, several kinds of processing such as rotation, color adjustment, cropping, and translation may be applied to the image. An individual processing step such as "rotation" or "translation" may be referred to as a data enhancement element (an enhancement operation), and a combination of two data enhancement elements is referred to in the present disclosure as one "data enhancement operation". Assuming the number of data enhancement elements is 36, the number of possible pairwise combinations, i.e., the number of data enhancement operations K, is 36² = 1296; that is, the search space contains K data enhancement operations. For each image input to the network, one data enhancement operation can be sampled from the search space and applied to the image for data enhancement.
In some embodiments, data enhancement elements include, but are not limited to: horizontal shear (HorizontalShear), vertical shear (VerticalShear), horizontal translation (HorizontalTranslate), vertical translation (VerticalTranslate), rotation (Rotate), color adjustment (ColorAdjust), posterization (Posterize), exposure processing (Solarize), contrast processing (Contrast), sharpening (Sharpness), brightness processing (Brightness), auto contrast (AutoContrast), histogram equalization (Equalize), inversion (Invert), and the like.
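As a sketch (illustrative Python; not part of the disclosure), the search space of pairwise combinations can be enumerated directly, and with the full set of 36 elements it would contain 36² = 1296 operations:

```python
import itertools

# A few of the data enhancement elements listed above (abbreviated; the
# example in the text assumes 36 elements in total).
ELEMENTS = ["HorizontalShear", "VerticalShear", "Rotate", "Solarize",
            "Posterize", "Contrast", "Sharpness", "Brightness",
            "AutoContrast", "Equalize", "Invert", "ColorAdjust"]

# Each data enhancement operation is an ordered pair of elements, so the
# search space holds K = len(ELEMENTS) ** 2 operations.
SEARCH_SPACE = list(itertools.product(ELEMENTS, repeat=2))
K = len(SEARCH_SPACE)
```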
Enhancement distribution parameter (θ) and probability distribution (p_θ): the enhancement distribution parameter may be a numerical value, and each data enhancement operation may correspond to one value of the enhancement distribution parameter. The probability distribution is obtained by converting the enhancement distribution parameter: the value corresponding to each data enhancement operation is converted into a value between 0 and 1, i.e., into a probability, such that the probabilities of all data enhancement operations in the search space sum to 1. Illustratively, the probability distribution may be {0.1, 0.08, 0.32, …}, K probabilities in total whose sum equals 1, where each probability represents the probability that the corresponding data enhancement operation is sampled.
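The text does not name the conversion from the enhancement distribution parameter θ to the probability distribution p_θ, but a softmax (an assumption here) produces exactly such a distribution, K values between 0 and 1 summing to 1:

```python
import numpy as np

def enhancement_distribution(theta):
    """Convert the enhancement distribution parameter theta (one value per
    data enhancement operation) into a probability distribution p_theta."""
    z = np.exp(theta - np.max(theta))  # subtract max for numerical stability
    return z / z.sum()
```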
In the embodiments of the present disclosure, the enhancement distribution parameter may serve as a hyper-parameter during network training and may be optimized simultaneously with the network parameters along with the training process of the network. In addition, a probability distribution can be regarded as a data enhancement strategy (an augmentation policy), because the data enhancement operations applied to the network's training data are sampled based on the probability distribution; when the probability distribution changes, the sampled data enhancement operations change accordingly.
Fig. 3 illustrates an application scenario example of a method for determining a model hyper-parameter provided in at least one embodiment of the present disclosure. The application scenario provides a training system, which may include: a parameter management server 11 and a controller 12. Wherein,
and the parameter management server 11 is used for managing and updating the numerical value of the hyper-parameter.
The controller 12 includes a plurality of machine learning models, the plurality of machine learning models perform loop or iterative update based on the hyper-parameters, and obtain performance parameters of the trained models, and the controller 12 feeds back the performance parameters to the parameter management server, so that the parameter management server performs update of the hyper-parameters accordingly. The controller 12 may also continue the training of the model based on the updated hyper-parameters of the parameter management server 11.
Referring to FIG. 3, which illustrates a framework that may be employed by the training system, the system adopts a two-layer structure for online optimization; this two-layer structure allows the enhancement distribution parameter to be optimized simultaneously with the network parameters. The two-layer structure includes the parameter management server 11 and the controller 12. The parameter management server 11 is responsible for storing and updating the enhancement distribution parameter and for deriving the probability distribution over data enhancement operations from it. The controller 12 may include a set of networks to be trained; assuming there are N networks in total, N may, for example, be any value from 4 to 16. The network structure is not limited by the present disclosure and may be, for example, a CNN (Convolutional Neural Network).
The parameter management server 11 and the controller 12, taken together, may be regarded as an Outer Loop that operates in a reinforcement learning manner. The outer loop may iterate over T time steps to update the enhancement distribution parameter θ maintained by the parameter management server 11. The enhancement distribution parameter serves as the hyper-parameter, and updating it can be regarded as the action of the parameter management server 11 during reinforcement learning. The validation-set accuracy of the networks that the controller 12 trains based on the enhancement distribution parameter serves as the returned reward value (Reward); the enhancement distribution parameter is updated according to this reward, and after iterating for T time steps the reward, i.e., the highest accuracy, is optimized, which is the target direction of the reinforcement learning. Here t = 1, 2, …, T_max.
The N networks to be trained in the controller 12 form the Inner Loops, which may run in parallel. Each network is trained on data enhanced by the data enhancement operations sampled from the enhancement distribution parameter that the parameter management server 11 updates in the outer loop. Training of the network parameters may use stochastic gradient descent (SGD). Each network may iterate for i = 1, 2, …, I steps, and the validation-set accuracy of a network after I iterations is used, as the returned reward value described above, to update the enhancement distribution parameter.
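A minimal inner-loop sketch (PyTorch is assumed purely for illustration; `augment_batch` is a hypothetical placeholder for applying a sampled operation to a batch):

```python
import torch
import torch.nn.functional as F

def augment_batch(images, operation):
    # Placeholder: a real implementation would map `operation` (a pair of
    # data enhancement elements) onto concrete image transforms.
    return images

def inner_loop(network, loader, p_theta, search_space, num_steps, lr=0.1):
    """Train one network for num_steps SGD iterations; each iteration
    samples a data enhancement operation from p_theta and applies it."""
    opt = torch.optim.SGD(network.parameters(), lr=lr)
    data = iter(loader)
    for _ in range(num_steps):
        images, labels = next(data)
        idx = torch.multinomial(p_theta, 1).item()  # sample one operation
        images = augment_batch(images, search_space[idx])
        loss = F.cross_entropy(network(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return network
```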
Fig. 4 illustrates yet another method for determining model hyper-parameters provided by at least one embodiment of the present disclosure, and in the following description of embodiments, the parallel multiple paths may include parallel multiple networks, which may be machine learning models. Also, in this example, the hyper-parameter is an enhancement distribution parameter for performing image enhancement processing on the sample image set. As shown in connection with fig. 4, the method may include the following processing steps:
in step 400, a plurality of networks in parallel each perform enhancement operation sampling based on the initialized enhancement distribution parameters, and obtain data enhancement operations used by the networks.
Alternatively, multiple networks included in the controller 12 may be trained in parallel.
Each of the networks has initialized network parameters, and may sample, based on the enhancement probability distribution maintained by the parameter management server 11, the data enhancement operation applied to the input data of the network. The data enhancement operations sampled by different networks may be different. The sampled data enhancement operation may be referred to as a target data enhancement operation.
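By way of illustration, the sampling of this step can be sketched as drawing from a categorical distribution over the search space of operations. In the sketch below, the softmax parameterization of p_θ, the operation names, and the value N = 8 are assumptions made for the example rather than requirements of this disclosure:

```python
import numpy as np

def enhancement_distribution(theta):
    """Turn the enhancement distribution parameters theta into a
    probability distribution over data enhancement operations (softmax)."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

# Hypothetical search space of data enhancement operations.
OPERATIONS = ["rotate", "shear_x", "shear_y", "translate_x",
              "translate_y", "posterize", "solarize", "color"]

theta = np.zeros(len(OPERATIONS))          # initialized enhancement distribution parameters
p_theta = enhancement_distribution(theta)  # probability of each operation

# Each of the N parallel networks samples its own target operation.
rng = np.random.default_rng(0)
N = 8
sampled_ops = [OPERATIONS[rng.choice(len(OPERATIONS), p=p_theta)] for _ in range(N)]
print(sampled_ops)
```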
In step 402, each network of the plurality of networks performs data enhancement on the input data of the network according to the data enhancement operation, and performs network training using the enhanced data to obtain updated network parameters.
Optionally, after the data enhancement is performed for each network, network training is performed using the enhanced data, so that updated network parameters can be obtained. The network training of this step may be iterated a preset number of times.
For example, the data enhancement operation may include performing the following processing on the image: rotation, color adjustment, horizontal shearing, vertical shearing, horizontal translation, vertical translation, tone separation (posterization), exposure processing (solarization), and the like. The target data enhancement operation may be utilized to perform image enhancement processing on at least one sample image of each path, resulting in at least one enhanced image. The initial network may then be trained for M1 iterations based on the at least one enhanced image of each of the plurality of paths.
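The following sketch illustrates how a sampled target data enhancement operation might be applied to a sample image, using the Pillow library as one possible backend; the mapping from operation names to transforms and the fixed magnitudes are illustrative assumptions:

```python
from PIL import Image, ImageOps, ImageEnhance

# Hypothetical mapping from operation names to image transforms;
# the magnitudes are fixed placeholders here, while in practice they
# could also be sampled.
def apply_enhancement(img: Image.Image, op: str) -> Image.Image:
    if op == "rotate":
        return img.rotate(15)
    if op == "posterize":            # tone separation
        return ImageOps.posterize(img, 4)
    if op == "solarize":             # exposure processing
        return ImageOps.solarize(img, threshold=128)
    if op == "color":
        return ImageEnhance.Color(img).enhance(1.5)
    if op == "translate_x":          # horizontal translation by 10 pixels
        return img.transform(img.size, Image.AFFINE, (1, 0, 10, 0, 1, 0))
    return img                       # unknown op: leave the image unchanged

enhanced = apply_enhancement(Image.new("RGB", (32, 32)), "posterize")
```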
In step 404, the accuracy of each of the trained networks on the validation set is obtained.
Optionally, the performance parameter of the model is, for example, the accuracy.
For example, the accuracy of the networks trained in step 402 may be verified on a verification data set. Among the N networks of the controller 12, the data enhancement operations sampled by different networks are different; correspondingly, the effect of the trained networks also differs, and the accuracy of each network may be different.
In step 406, the enhanced distribution parameters are updated based on the accuracy rates of the plurality of networks through a reinforcement learning algorithm.
Optionally, the enhancement distribution parameters maintained by the parameter management server 11 are updated according to a reinforcement learning algorithm, based on the accuracies of the plurality of trained networks. The optimization of the accuracy serves as the target direction of the reinforcement learning, and the action (i.e., the updating of the enhancement distribution parameters) is rewarded based on the accuracy.
In step 408, the network parameters of the network with the highest accuracy are applied to the plurality of networks to obtain a new network for the next iteration, and updating of a time step of reinforcement learning is completed.
Alternatively, the network with the highest accuracy on the verification set among the plurality of networks of the controller 12 may be determined, and the network parameters of that network may be synchronized to all networks, i.e., the synchronized network parameters are applied as the network parameters of all networks in the controller 12.
After the network parameters are synchronized, the multiple networks in the controller 12 may be referred to as new networks, and the process may proceed to the next time-step update of reinforcement learning.
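A minimal sketch of this synchronization, assuming the N networks are PyTorch modules living in a single process:

```python
import copy
import torch

def sync_to_best(networks, accuracies):
    """Copy the parameters of the highest-accuracy network into all
    networks, giving every path the same starting point for the next
    time step (cf. synchronization arrow 16 in FIG. 3)."""
    best = max(range(len(networks)), key=lambda n: accuracies[n])
    best_state = copy.deepcopy(networks[best].state_dict())
    for net in networks:
        net.load_state_dict(best_state)
    return best
```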
In step 410, the new networks continue to iteratively update at the next time step of reinforcement learning based on the updated enhancement distribution parameters.
The iterative updating of this step repeats the foregoing steps 400 to 408; for example, the parameter management server 11 obtains an updated probability distribution based on the updated enhancement distribution parameters, and each network in the controller 12 may sample the data enhancement operation to be used based on the updated probability distribution. The networks are then trained on the enhanced data, the accuracy is verified, the enhancement distribution parameters are updated, and so on; the detailed description is omitted.
In step 412, when the preset number of update time steps is reached, the network with the highest accuracy on the verification set is determined as the finally trained network, and the enhancement distribution parameters updated at the last time step are obtained, so that the data enhancement operation to be taken can be determined according to these enhancement distribution parameters.
Reaching the preset number of update time steps is an exemplary preset cutoff condition.
The number of update time steps of the reinforcement learning training may be preset, e.g., to T_max. When T reaches T_max, the network with the highest accuracy can be used as the final network, and the finally updated enhancement distribution parameters are used as the optimized hyper-parameters. Based on these hyper-parameters, the probability distribution can be obtained, and training data can be enhanced by sampling data enhancement operations according to this probability distribution during network training.
In the method for determining model hyper-parameters in some embodiments, the enhancement distribution parameters are used as the hyper-parameters for network training, and during the training of the network the hyper-parameters are updated by means of the staged feedback of reinforcement learning, so that the optimization of the hyper-parameters and the optimization of the network parameters proceed simultaneously, which remarkably improves the search efficiency of the hyper-parameters. Moreover, because the hyper-parameters are trained by reinforcement learning with the verification-set accuracy of the network as the optimization target, the updating of the hyper-parameters takes the optimal accuracy as its training direction, so the obtained hyper-parameters are more accurate and have a better effect.
In addition, it should be noted that, in the conventional hyper-parameter optimization manner, a Network must first be trained once all the way through: similar to the framework of FIG. 3, the Network is trained once from the initialized Network to the Final Network, and a better data enhancement strategy is then searched based on the trained Network. The whole search process is very time-consuming, and a large number of networks must be trained to obtain the final data enhancement strategy, so both the time cost and the computational cost are very high. The optimization method of the present disclosure is equivalent to interrupting this one-pass training of the Network: several staged feedbacks are added to the process from the initialized Network to the Final Network, with one feedback after every certain number of network iterations, and the enhancement distribution parameters are updated based on the staged feedback in a reinforcement learning manner. In this way, the optimization of the hyper-parameters and the optimization of the network parameters are carried out simultaneously, which greatly accelerates the optimization of the hyper-parameters and significantly reduces the time cost. The hyper-parameter optimization takes the accuracy of the network as its target direction and thus achieves a better effect, searching for better hyper-parameters quickly and well, and the number of trained networks is greatly reduced compared with the conventional manner.
In this embodiment, taking data enhancement operations on the input image such as rotation, shearing and translation as an example, a better data enhancement policy for the input image is searched, so that a network trained on data enhanced according to the better policy achieves better performance.
First, the framework shown in FIG. 3 is initialized: the controller 12 includes N networks with the same initialized network parameters ω', and the enhancement distribution parameter θ of each data enhancement operation in the search space is initialized. The parameter management server 11 can obtain the probability distribution p_θ of the data enhancement operations based on the enhancement distribution parameters.
In addition, it can be set that, for each network in the inner loop, the network is iteratively updated I times at each time step of reinforcement learning, after which the enhancement distribution parameters of the outer loop are updated once. The reinforcement learning iterations of the outer loop may be set to T_max in total; after T_max iterations, the optimized network and the optimized hyper-parameters are finally obtained.
Fig. 5 illustrates the training process of one of the networks. In the network training, each iteration of updating the network parameters takes a group of B input images as the input data of the network, and each such group can be regarded as one batch; that is, the network is iteratively trained over a plurality of batches.
In step 500, a target data enhancement operation to be used is sampled from the plurality of data enhancement operations based on the enhancement probability distribution of the data enhancement policy.
For example, in training with one group of input images, the group includes B input images. For each input image, a data enhancement operation, which may be referred to as a target data enhancement operation, may be sampled based on the enhancement probability distribution. Moreover, these sampled data enhancement operations can be regarded as training parameters of the model obtained by sampling based on the hyper-parameters, and the data enhancement operations sampled by the networks of different paths may be different.
For example, for one of the input images, the sampled data enhancement operation may include two data enhancement elements, "rotate" and "cut". The data enhancement operation can be applied to the input image to obtain an enhanced image.
In step 502, network training is performed using the enhanced data to obtain updated network parameters.
Optionally, the enhanced data may be enhanced images. The network training of this step can be iterated multiple times, finally yielding updated network parameters.
When the network parameters are updated, a gradient descent method can be used, with the network parameters optimized so as to minimize the loss function. Equation (1) illustrates one way of updating the network parameters:

$$\omega_{i+1} = \omega_i - \eta_w \nabla_\omega \mathcal{L}(a_i, \omega_i; x_B, y_B) \qquad (1)$$

As in equation (1) above, the update of the network parameters involves the following quantities:

$\eta_w$ is the learning rate of the network parameters;

$\mathcal{L}(a_i, \omega_i; x_B, y_B)$ is the loss function value of the current batch iteration, determined by the data enhancement operation $a_i$ sampled for the current iteration, the current network parameters $\omega_i$, and the input data $x_B, y_B$ of the current iteration.
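A per-batch inner-loop step corresponding to equation (1) might be sketched in PyTorch as follows, where the `apply_op` callable realizing the sampled data enhancement operation on the batch is a hypothetical placeholder:

```python
import torch.nn.functional as F

def inner_step(net, optimizer, x_b, y_b, apply_op):
    """One inner-loop iteration: enhance the current batch of B images
    with the sampled target operation, then update the network
    parameters by gradient descent as in equation (1)."""
    x_aug = apply_op(x_b)                    # data enhancement of the batch x_B
    optimizer.zero_grad()
    loss = F.cross_entropy(net(x_aug), y_b)  # L(a_i, w_i; x_B, y_B)
    loss.backward()
    optimizer.step()                         # w_{i+1} = w_i - eta_w * grad
    return loss.item()
```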
In step 504, it is determined whether the number of iterations of the network reaches a preset number of iterations.
As mentioned before, it can be set that, at each time step of reinforcement learning, each network in the controller is iteratively updated I times, i.e., the network parameters are updated over I batches.
Optionally, if it is determined that the iteration number i of the network is smaller than the preset iteration number I, the process returns to step 500, and the training and iteration of the network parameters continue with the next group of B input images.
If the iteration number i of the network is equal to the preset iteration number I, the process continues to step 506.
For example, the preset number of iterations of this step may be M1. In addition, the number of iterations of the network may be the same or different at different time steps.
In addition, over the multiple iterative trainings of a network, the result of the first training can be referred to as a first inner-loop updated machine learning model; after a further iteration, the result can be referred to as a second inner-loop updated machine learning model, and so on.
In step 506, a trained network is obtained.
Each Network is trained according to the process shown in FIG. 5, where the training refers to the training at each time step of reinforcement learning. The N networks may be trained in parallel, resulting in the following parameters: $\{\omega_{T,n}\}_{n=1:N}$, i.e., the network parameters of each of the N networks, where $\omega_{T,n}$ denotes the network parameters of the n-th network at the T-th time step of reinforcement learning.
Then, based on the N trained networks, the accuracy of each network can be verified on the verification data set. As illustrated in FIG. 3, each network obtains an accuracy Acc. In some embodiments, the accuracy is taken as an example of the performance parameter of the network, but in other examples other parameters may be used as the performance parameter.
The accuracy may be obtained by processing at least one test image in the test image set through the trained network to obtain an image processing result, and deriving the accuracy based on the image processing result.
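The accuracy used as the performance parameter can be computed in the usual way; a PyTorch sketch is given below, where a `val_loader` yielding batches of (image, label) pairs is assumed:

```python
import torch

@torch.no_grad()
def validate(net, val_loader):
    """Verification-set accuracy of a trained network; this value
    serves as the returned Reward of the reinforcement learning
    outer loop."""
    net.eval()
    correct, total = 0, 0
    for x, y in val_loader:
        pred = net(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```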
The accuracy serves as the Reward value of reinforcement learning, and based on this Reward value the enhancement distribution parameters can be updated through a reinforcement learning algorithm. The disclosed embodiments may use the policy gradient algorithm REINFORCE to update the enhancement distribution parameters based on the accuracy of each network.
Equation (2) below illustrates the REINFORCE algorithm:

$$\nabla_\theta \bar{J} = \frac{1}{N}\sum_{n=1}^{N} Acc(\omega_{T,n})\, \nabla_\theta \log p_\theta(a_{T,n}) \qquad (2)$$

In equation (2), $Acc(\omega_{T,n})$ is the verification-set accuracy of the n-th network at the T-th time step of reinforcement learning, and $a_{T,n}$ is the data enhancement operation of the n-th network, sampled according to the probability distribution $p_\theta$. Therefore, according to equation (2), the average gradient of the enhancement distribution parameters over the plurality of networks is determined from the accuracy of each network and the data enhancement operation sampled by that network.
As shown in equation (2) above, still taking the accuracy as the performance parameter as an example, a model update parameter of each path can be obtained from the accuracy; the model update parameter may be the gradient contributed by the network of that path, and the gradients of the paths are averaged to obtain the average gradient (i.e., the average update parameter).
Further, before equation (2) is applied, the accuracies of the plurality of paths may be normalized, and the calculation carried out on the normalized accuracies.
Based on the above average gradient, the enhancement distribution parameters can be updated according to equation (3):

$$\theta_{T+1} = \theta_T + \eta_\theta \nabla_\theta \bar{J} \qquad (3)$$

where $\eta_\theta$ is the learning rate of the probability distribution. The value of the hyper-parameters updated according to equation (3) may be referred to as a first updated value; after further iterative updates, a second updated value, a third updated value, and so on, of the hyper-parameters are obtained.
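Equations (2) and (3) can be sketched as follows, assuming the softmax parameterization p_θ = softmax(θ) and one sampled operation per network per time step; the normalization of the accuracies mentioned above is also included. These modelling choices are illustrative assumptions rather than the only possible implementation:

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def reinforce_update(theta, sampled_ops, accuracies, eta_theta=0.05):
    """Update the enhancement distribution parameters theta.

    sampled_ops : index of the operation sampled by each of the N networks
    accuracies  : verification accuracies Acc(w_{T,n}) of the N networks
    """
    acc = np.asarray(accuracies, dtype=float)
    acc = (acc - acc.mean()) / (acc.std() + 1e-8)  # normalize the performance parameters
    p = softmax(theta)
    grad = np.zeros_like(theta)
    for op, a in zip(sampled_ops, acc):
        g = -p.copy()                 # grad of log softmax: e_op - p
        g[op] += 1.0
        grad += a * g                 # reward-weighted log-prob gradient, equation (2)
    grad /= len(sampled_ops)          # average gradient over the N networks
    return theta + eta_theta * grad   # gradient ascent step, equation (3)
```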
As above, within one time step, the enhancement distribution parameters are updated based on the accuracies of the networks on the verification set, and the parameter management server 11 may update the probability distribution according to the updated enhancement distribution parameters, so that the sampling of data enhancement operations continues according to the updated probability distribution at the next time step.
Referring to fig. 3, the dashed arrows 13 indicate that the accuracy of each network is fed back to the parameter management server 11, and the parameter management server 11 can update the probability distribution p_θ according to the REINFORCE algorithm described above. The solid arrow 14 indicates that the iteration of the next time step continues based on the updated probability distribution. In one embodiment, the network parameters of the highest-accuracy network may be synchronized to all networks of the controller before the next iteration starts; as shown in fig. 3, the network parameters of the highest-accuracy network are the parameters 15, which are applied to all networks as indicated by the synchronization arrow 16. In other examples, other manners of unifying the network parameters before the iteration starts may be adopted.
The updated networks continue sampling based on the updated probability distribution and begin the iterative update of the next time step. In the iterative update of the next time step, the obtained networks may be referred to as second updated machine learning models, and the value of the hyper-parameter is updated to a second updated value according to the performance parameters of the second updated machine learning model of each of the plurality of paths.
Upon reaching the preset number of update time steps T_max of reinforcement learning, the network with the highest accuracy is used as the Final Network, i.e., the optimal network obtained by training, which may also be called the target machine learning model. Meanwhile, the enhancement distribution parameters used in the last time step serve as the final optimized hyper-parameters, i.e., the final values of the hyper-parameters.
The process flow of the method for determining model hyper-parameters shown in FIG. 4 and FIG. 5 can be represented by the following process flow, in which the hyper-parameters are obtained and, at the same time, the model weight parameters are obtained. It is understood that other flow schemes may be adopted, as long as the idea of the method of determining the model hyper-parameters is consistent with any embodiment of the disclosure.
The following illustrates the flow of a method of determining a model hyperparameter:
For example, the hyper-parameters are initialized to θ_0, and the network parameters of each network are initialized to ω_0.
For each time step of reinforcement learning (T = 1, …, T_max time steps in total), the following processes are executed:
each network of the controller carries out I-I iterative updates, and each iterative update (I is more than or equal to 0 and less than or equal to I) updates the network parameters according to the formula (1)When I times of iteration are finished, obtaining a network parameter wT,nW ofT,nThe network parameters of the nth network in the T-th time step of reinforcement learning are shown, wherein N networks are shared.
After the network parameters ω_{T,n} of the N networks are obtained, the average gradient of the plurality of networks with respect to the enhancement distribution parameters is calculated according to equation (2), and the hyper-parameters are updated according to equation (3); the hyper-parameters used at the T-th time step of reinforcement learning can be expressed as θ_T.
Before the network training at the next time step of reinforcement learning, the network parameters ω_T of the network with the highest accuracy on the verification set are selected from the N networks and synchronously applied to all N networks, so that the N networks have the same starting point for the iterative training.
When the preset reinforcement learning time step T_max is reached, the final network parameters ω_{T_max} and the hyper-parameters θ_{T_max} are obtained.
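Putting the above steps together, the overall two-tier flow might be organized as in the following Python-level sketch; `make_network`, `sample_op`, and `train_I_iters` are hypothetical helpers standing in for the operations described above, while `validate`, `reinforce_update`, and `sync_to_best` are the sketches given earlier:

```python
def optimize_hyperparameters(theta0, N, I, T_max):
    """Schematic of the two-tier online optimization: an outer
    reinforcement-learning loop over the enhancement distribution
    parameters and N parallel inner SGD loops over the networks."""
    theta = theta0
    networks = [make_network() for _ in range(N)]       # same initialized parameters w0
    for T in range(T_max):                              # outer loop: T_max time steps
        ops = [sample_op(theta) for _ in range(N)]      # per-network target enhancement op
        for net, op in zip(networks, ops):              # inner loops (may run in parallel)
            train_I_iters(net, op, I)                   # I updates per equation (1)
        accs = [validate(net) for net in networks]      # Reward values on the verification set
        theta = reinforce_update(theta, ops, accs)      # equations (2) and (3)
        sync_to_best(networks, accs)                    # common starting point for next step
    best = max(range(N), key=lambda n: accs[n])
    return theta, networks[best]                        # final hyper-parameters, Final Network
```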
In addition, in some embodiments, reaching the preset number of update time steps T_max of reinforcement learning is taken as the cutoff condition for the hyper-parameter optimization; in other examples, other cutoff conditions may be adopted. Furthermore, in this example a trained network is obtained at the same time as the optimized hyper-parameters; alternatively, the trained network need not be retained, and the finally optimized hyper-parameters may instead be used to retrain an initial machine learning model with initialized model parameters, so as to obtain a trained target machine learning model.
After the hyper-parameters are obtained, when a network needs to be trained, if the search space of the data enhancement operations adopted by the network is the same as the search space used during the hyper-parameter optimization, the hyper-parameters can be used directly for data enhancement during network training, so that the network obtains better performance. If the search space of the data enhancement operations corresponding to the network to be trained has changed, the optimization method provided by the disclosure can be reused to search for new hyper-parameters, i.e., a better data enhancement strategy is searched for again.
Table 1 below illustrates the experimental effect of the method for determining model hyper-parameters provided in any embodiment of the present disclosure. Referring to Table 1, ResNet-18, WideResNet-28-10, etc. on the left side represent different network structures; Baseline, Cutout, etc. at the top of the table represent different methods of obtaining data enhancement policies, where OHL-Auto-Aug represents the method of the present disclosure. The data enhancement strategies obtained by the different methods are applied to the training of the various network structures, and experiments show that the error rate of the trained network can be reduced by about 30% compared with the other methods. A machine learning model trained with the hyper-parameters obtained by the disclosed method thus achieves improved accuracy and other performance.
TABLE 1 comparison of the effectiveness of the disclosed method with other methods
Fig. 6 provides an apparatus for determining model hyper-parameters; as shown in FIG. 6, the apparatus may include: an initialization module 61, a model training module 62, a hyper-parameter updating module 63, and a hyper-parameter obtaining module 64.
An initialization module 61, configured to determine an initial value of the hyper-parameter;
a model training module 62, configured to perform M1 iterative training on the initial machine learning model through each of a plurality of parallel paths according to the initial value of the hyper-parameter and the sample image set, to obtain a first updated machine learning model for each path, where training parameters of different paths in the plurality of paths have different values obtained by sampling based on the hyper-parameter, and M1 is greater than or equal to 1 and less than or equal to the first value;
a hyper-parameter updating module 63, configured to update the value of the hyper-parameter to a first updated value based on the performance parameter of the first updated machine learning model of each path of the plurality of paths;
A hyper-parameter obtaining module 64, configured to perform M2 iterative training and further value updating of the hyper-parameters on the first updated machine learning models of the multiple paths based on the first updated values of the hyper-parameters and the sample image set until a preset cutoff condition is reached, and obtain final values of the hyper-parameters, where M2 is greater than or equal to 1 and less than or equal to the first values.
In some optional embodiments, the hyper-parameter acquisition module 64 is further configured to: before the performing M2 times of iterative training and further value updating of the hyper-parameter on the first updated machine learning models of the plurality of paths, selecting a first target updated machine learning model from the first updated machine learning models of the plurality of paths; updating model parameters of a first updated machine learning model of the plurality of paths to model parameters of the first target updated machine learning model.
In some optional embodiments, the hyper-parameter obtaining module 64, when configured to select the first target updated machine learning model from the first updated machine learning models of the plurality of paths, includes: and selecting a first target updating machine learning model from the first updating machine learning models of the paths based on the performance parameters of the first updating machine learning models of the paths.
In some optional embodiments, the model training module 62 is specifically configured to: performing first iterative training on the initial machine learning model through each path in a plurality of paths based on the initial value of the hyper-parameter and at least one first sample image in the sample image set to obtain a first inner loop updating machine learning model of each path; performing second iterative training on the first inner-loop updating machine learning model of each path through each path in the plurality of paths based on the initial value of the hyper-parameter and at least one second sample image in the sample image set to obtain a second inner-loop updating machine learning model of each path; updating the machine learning model based on the second inner loop of each path of the plurality of paths to obtain a first updated machine learning model of each path.
In some optional embodiments, the model training module 62, when used to obtain the first inner loop update machine learning model for each path, comprises: sampling for multiple times based on the initial value of the hyper-parameter to obtain a first training parameter of each path in the multiple paths; and performing first iterative training on an initial machine learning model based on the first training parameter of each path in the paths and at least one first sample image in the sample image set to obtain a first inner loop updating machine learning model of each path.
In some optional embodiments, the training parameters used in the first iterative training and the second iterative training of each path are obtained by performing different sampling based on the initial value of the hyper-parameter.
In some optional embodiments, the hyper-parameter updating module 63 is specifically configured to: determining a model update parameter for each path of the plurality of paths based on a performance parameter of a first updated machine learning model for the each path; averaging the model update parameters of the paths to obtain an average update parameter; and updating the value of the hyper-parameter to a first updated value according to the average update parameter.
In some optional embodiments, the hyper parameter update module 63 is specifically configured to: normalizing the performance parameters of the first updated machine learning model of each of the plurality of paths; and updating the numerical value of the hyper-parameter to a first updated value based on the performance parameter of the first updated machine learning model of each path in the plurality of paths obtained after the normalization processing.
In some optional embodiments, the performance parameter comprises an accuracy rate.
In some optional embodiments, the hyper-parameter obtaining module 64 is specifically configured to: performing M2 iterative training on the first updated machine learning model of each path in the plurality of paths based on the first updated value of the hyper-parameter and the sample image set to obtain a second updated machine learning model of each path; updating the value of the hyperparameter to a second updated value based on the performance parameters of the second updated machine learning model for each of the plurality of paths.
In some optional embodiments, the hyper-parameters comprise enhancement distribution parameters for image enhancement processing of the sample image set; the model training module 62 is specifically configured to: determining an enhancement probability distribution according to the enhancement distribution parameters, wherein the enhancement probability distribution comprises the probabilities of a plurality of image enhancement operations; based on the enhancement probability distribution, sampling a target data enhancement operation for each path of the parallel multiple paths from the plurality of image enhancement operations, and performing image enhancement processing on at least one sample image of each path using the sampled target data enhancement operation, to obtain at least one enhanced image; and performing M1 iterative training of the initial machine learning model based on the at least one enhanced image of each of the plurality of paths.
In some optional embodiments, the hyper-parameter updating module 63 is further configured to obtain the performance parameter of the first updated machine learning model, including the following processes: processing at least one test image in the test image set through the first updating machine learning model of each path in the plurality of paths to obtain an image processing result; and obtaining a performance parameter of a first updated machine learning model of each path based on the image processing result corresponding to each path in the plurality of paths.
In some optional embodiments, the preset cutoff condition comprises at least one of: the hyper-parameter has been updated a preset number of times; or the performance of the updated machine learning models obtained by the plurality of paths reaches the target performance.
In some optional embodiments, the hyper parameter obtaining module 64 is further configured to select a target machine learning model from the finally updated machine learning models of the plurality of paths obtained when the preset cutoff condition is reached, where the target machine learning model is a trained machine learning model for image processing.
In some optional embodiments, the hyper-parameter obtaining module 64 is further configured to, after obtaining the final value of the hyper-parameter, train an initial machine learning model of the initialized model parameter based on the final value of the hyper-parameter, so as to obtain a trained target machine learning model.
The present disclosure also provides an electronic device, comprising a memory and a processor, the memory being used for storing computer instructions executable on the processor, and the processor being used for implementing the method of determining model hyper-parameters or the method of training a machine learning model according to any embodiment of the present disclosure when executing the computer instructions.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of determining a model hyperparameter or a method of training a machine learning model according to any of the embodiments of the present disclosure.
Fig. 7 provides a flowchart of an image processing method in an embodiment of the present disclosure, and as shown in fig. 7, the method may include:
In step 700, an image to be processed is acquired.
This step does not limit the type of the to-be-processed image input to the model.
In step 702, the to-be-processed image is processed by using a machine learning model, so as to obtain an image processing result, wherein a hyper-parameter of the machine learning model is determined by a method for determining a hyper-parameter of a model according to any embodiment of the present disclosure.
In some embodiments, since the hyper-parameters of the machine learning model for processing the image are determined by the method for determining the hyper-parameters of the model according to the present disclosure, the hyper-parameters have a better effect, and therefore, the image processing result obtained by using the model processing also has a good performance.
Fig. 8 illustrates a training method of a machine learning model according to at least one embodiment of the present disclosure, and as shown in fig. 8, the method may include:
In step 800, a final value of the hyper-parameter is obtained.
Optionally, the final value of the hyper-parameter may be determined by the method for determining model hyper-parameters provided in any embodiment of the present disclosure.
In step 802, an initial machine learning model with initial model parameters is trained based on the final values of the hyper-parameters, resulting in a target machine learning model.
As the hyper-parameters are determined by the method for determining the hyper-parameters in any embodiment of the disclosure, the effect of the hyper-parameters is better, and therefore, the machine learning model trained by the hyper-parameters also has better performance.
Fig. 9 illustrates a training apparatus for a machine learning model according to at least one embodiment of the present disclosure, where, as shown in fig. 9, the apparatus includes: a hyper-parameter acquisition module 91 and a model training module 92.
A hyper-parameter obtaining module 91, configured to obtain a final value of a hyper-parameter by using the method for determining a model hyper-parameter according to any embodiment of the present disclosure;
and the model training module 92 is configured to train an initial machine learning model with initial model parameters based on the final values of the hyper-parameters, so as to obtain a target machine learning model.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of determining hyper-parameters of a model or a training method of a machine learning model according to any of the embodiments of the present disclosure.
The present disclosure also provides an electronic device comprising a memory for storing computer instructions executable on a processor, and a processor for implementing, when executing the computer instructions, a method for determining hyper-parameters of a model or a training method of a machine learning model according to any of the embodiments of the present disclosure.
One skilled in the art will appreciate that one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program may be stored, where the computer program, when executed by a processor, implements the steps of the method of determining model hyper-parameters described in any embodiment of the present disclosure, and/or implements the steps of the method of training a machine learning model described in any embodiment of the present disclosure. Here, "and/or" means having at least one of the two; e.g., "A and/or B" includes three schemes: A, B, and "A and B".
The embodiments in the disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the data processing apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
The foregoing description of specific embodiments of the present disclosure has been described. Other embodiments are within the scope of the following claims. In some cases, the acts or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Embodiments of the subject matter and functional operations described in this disclosure may be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this disclosure and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this disclosure can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this disclosure can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer does not necessarily have such devices. Further, the computer may be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Although this disclosure contains many specific implementation details, these should not be construed as limiting the scope of any disclosure or of what may be claimed, but rather as merely describing features of particular embodiments of the disclosure. Certain features that are described in this disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure, and is not intended to limit the scope of the present disclosure, which is to be construed as being limited by the appended claims.

Claims (10)

1. A method of determining a model hyper-parameter, the method comprising:
determining an initial value of the hyper-parameter;
performing M1 times of iterative training on an initial machine learning model through each path in a plurality of paths in parallel according to the initial value of the hyper-parameter and the sample image set to obtain a first updated machine learning model of each path, wherein training parameters of different paths in the plurality of paths have different values obtained by sampling based on the hyper-parameter, and M1 is greater than or equal to 1 and smaller than or equal to the first value;
updating the value of the hyper-parameter to a first updated value based on the performance parameters of the first updated machine learning model for each of the plurality of paths;
performing M2 iterative training and further numerical updating of the hyper-parameters on the first updated machine learning model of the plurality of paths based on the first updated value of the hyper-parameters and the sample image set until a preset cutoff condition is reached, obtaining a final numerical value of the hyper-parameters, wherein M2 is greater than or equal to 1 and less than or equal to the first numerical value.
2. The method of claim 1, wherein updating the value of the hyperparameter to a first updated value based on the performance parameters of the first updated machine learning model for each path of the plurality of paths comprises:
determining a model update parameter for each path of the plurality of paths based on a performance parameter of a first updated machine learning model for the each path;
carrying out average processing on the model updating parameters of the paths to obtain average updating parameters;
and updating the numerical value of the hyperparameter into a first updating value according to the average updating parameter.
3. The method of claim 1 or 2, wherein prior to the updating the value of the hyperparameter to a first updated value based on the performance parameters of the first updated machine learning model for each of the plurality of paths, the method further comprises:
normalizing the performance parameters of the first updated machine learning model of each of the plurality of paths;
the updating the value of the hyperparameter to a first updated value based on the performance parameters of the first updated machine learning model for each path of the plurality of paths, comprising:
and updating the numerical value of the hyper-parameter to a first updated value based on the performance parameter of the first updated machine learning model of each path in the plurality of paths obtained after the normalization processing.
4. The method of any of claims 1 to 3, wherein the hyper-parameters comprise enhancement distribution parameters for image enhancement processing of the sample image set;
performing M1 iterative training on an initial machine learning model through each path in a plurality of paths in parallel according to the initial values of the hyper-parameters and the sample image set, wherein the iterative training comprises:
determining enhancement probability distribution according to the enhancement distribution parameters, wherein the enhancement probability distribution comprises the probability of a plurality of image enhancement operations;
based on the enhancement probability distribution, sampling a target data enhancement operation for each path of the parallel multiple paths from the plurality of image enhancement operations, and performing image enhancement processing on at least one sample image of each path using the sampled target data enhancement operation, to obtain at least one enhanced image;
performing M1 iterative training of the initial machine learning model based on the at least one enhanced image of each of the plurality of paths.
5. A method for training a machine learning model, comprising:
obtaining a final value of the hyperparameter by the method of any one of claims 1 to 4;
and training an initial machine learning model with initial model parameters based on the final values of the hyper-parameters to obtain a target machine learning model.
6. An apparatus for determining a hyper-parameter of a model, the apparatus comprising:
the initialization module is used for determining the initial value of the hyper-parameter;
a model training module, configured to perform M1 iterative training on an initial machine learning model through each path of multiple paths in parallel according to the initial value of the hyper-parameter and the sample image set, to obtain a first updated machine learning model of each path, where training parameters of different paths in the multiple paths have different values obtained by sampling based on the hyper-parameter, and M1 is greater than or equal to 1 and less than or equal to the first value;
a hyper-parameter updating module for updating a value of the hyper-parameter to a first updated value based on the performance parameter of the first updated machine learning model for each of the plurality of paths;
and the hyper-parameter acquisition module is used for performing M2 times of iterative training and further value updating of the hyper-parameters on the first updated machine learning models of the paths based on the first updated values of the hyper-parameters and the sample image set until a preset cut-off condition is reached to obtain final values of the hyper-parameters, wherein M2 is greater than or equal to 1 and less than or equal to the first values.
7. The apparatus of claim 6, wherein the preset cutoff condition comprises at least one of:
the hyper-parameter has been updated a preset number of times;
or the performance of the updated machine learning model obtained by the plurality of paths reaches the target performance.
8. A training apparatus for a machine learning model, comprising:
a hyper-parameter obtaining module for obtaining a final value of the hyper-parameter by the method of any one of claims 1 to 4;
and the model training module is used for training an initial machine learning model with initial model parameters based on the final values of the hyper-parameters to obtain a target machine learning model.
9. An electronic device, comprising a memory and a processor, the memory being used for storing computer instructions executable on the processor, and the processor being configured to implement the method of any one of claims 1 to 4 or the method of claim 5 when executing the computer instructions.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 4, or carries out the method of claim 5.
CN201910384551.5A 2019-05-09 2019-05-09 Method and device for determining model hyper-parameters and training model and storage medium Active CN110110861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910384551.5A CN110110861B (en) 2019-05-09 2019-05-09 Method and device for determining model hyper-parameters and training model and storage medium

Publications (2)

Publication Number Publication Date
CN110110861A true CN110110861A (en) 2019-08-09
CN110110861B CN110110861B (en) 2021-11-26

Family

ID=67489108

Country Status (1)

Country Link
CN (1) CN110110861B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102591917A (en) * 2011-12-16 2012-07-18 华为技术有限公司 Data processing method and system and related device
US20140344193A1 (en) * 2013-05-15 2014-11-20 Microsoft Corporation Tuning hyper-parameters of a computer-executable learning algorithm
WO2017128961A1 (en) * 2016-01-30 2017-08-03 华为技术有限公司 Method and device for training model in distributed system
CN107018184A (en) * 2017-03-28 2017-08-04 华中科技大学 Distributed deep neural network cluster packet synchronization optimization method and system
CN107209873A (en) * 2015-01-29 2017-09-26 高通股份有限公司 Hyper parameter for depth convolutional network is selected
CN108021983A (en) * 2016-10-28 2018-05-11 谷歌有限责任公司 Neural framework search
CN108229647A (en) * 2017-08-18 2018-06-29 北京市商汤科技开发有限公司 The generation method and device of neural network structure, electronic equipment, storage medium
US20180225391A1 (en) * 2017-02-06 2018-08-09 Neural Algorithms Ltd. System and method for automatic data modelling
CN109272118A (en) * 2018-08-10 2019-01-25 北京达佳互联信息技术有限公司 Data training method, device, equipment and storage medium
CN109299142A (en) * 2018-11-14 2019-02-01 中山大学 A kind of convolutional neural networks search structure method and system based on evolution algorithm
US20190095785A1 (en) * 2017-09-26 2019-03-28 Amazon Technologies, Inc. Dynamic tuning of training parameters for machine learning algorithms
CN109657805A (en) * 2018-12-07 2019-04-19 泰康保险集团股份有限公司 Hyper parameter determines method, apparatus, electronic equipment and computer-readable medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BARRET ZOPH ET AL.: "Neural Architecture Search with Reinforcement Learning", 《HTTPS://ARXIV.ORG/ABS/1611.01578V2》 *
EKIN D.CUBUK ET AL.: "autoAugment:learning augmentation policies from data", 《HTTPS://ARXIV.ORG/ABS/1805.09501V2》 *
JAMES BERGSTRA ET AL.: "Random Search for Hyper-Parameter Optimization", 《JOURNAL OF MACHINE LEARNING RESEARCH》 *
VOLODYMYR MNIH ET AL.: "Asynchronous methods for deep reinforcement learning", 《HTTPS://ARXIV.ORG/ABS/1602.01783V2》 *
朱汇龙 等: "基于人群的神经网络超参数优化的研究", 《信息技术》 *
陆高: "基于智能计算的超参数优化及其应用研究", 《中国优秀硕士学位论文全文数据库信息科技辑》 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889450B (en) * 2019-11-27 2023-08-11 Tencent Technology (Shenzhen) Co., Ltd. Hyper-parameter tuning and model construction method and device
CN110889450A (en) * 2019-11-27 2020-03-17 Tencent Technology (Shenzhen) Co., Ltd. Method and device for hyper-parameter tuning and model building
CN111061875B (en) * 2019-12-10 2023-09-15 Shenzhen Zhuiyi Technology Co., Ltd. Hyper-parameter determination method, device, computer equipment and storage medium
CN111061875A (en) * 2019-12-10 2020-04-24 Shenzhen Zhuiyi Technology Co., Ltd. Hyper-parameter determination method, device, computer equipment and storage medium
CN111260074A (en) * 2020-01-09 2020-06-09 Tencent Technology (Shenzhen) Co., Ltd. Method for determining hyper-parameters, related device, equipment and storage medium
CN111260074B (en) * 2020-01-09 2022-07-19 Tencent Technology (Shenzhen) Co., Ltd. Method for determining hyper-parameters, related device, equipment and storage medium
CN111275170A (en) * 2020-01-19 2020-06-12 Tencent Technology (Shenzhen) Co., Ltd. Model training method and related device
CN111275170B (en) * 2020-01-19 2023-11-24 Tencent Technology (Shenzhen) Co., Ltd. Model training method and related device
CN111613287A (en) * 2020-03-31 2020-09-01 Wuhan Kingmed Medical Laboratory Co., Ltd. Report coding model generation method, system and equipment based on Glow network
CN113555008A (en) * 2020-04-17 2021-10-26 Alibaba Group Holding Ltd. Parameter adjusting method and device for a model
CN111539177A (en) * 2020-04-22 2020-08-14 Institute of Microelectronics, Chinese Academy of Sciences Method, device and medium for determining hyper-parameters of layout feature extraction
TWI831016B (en) * 2020-06-05 2024-02-01 HTC Corporation Machine learning method, machine learning system and non-transitory computer-readable storage medium
CN113762327A (en) * 2020-06-05 2021-12-07 HTC Corporation Machine learning method, machine learning system and non-transitory computer readable medium
WO2021248791A1 (en) * 2020-06-09 2021-12-16 Beijing SenseTime Technology Development Co., Ltd. Method and apparatus for updating data enhancement strategy, and device and storage medium
JP2022541370A (en) * 2020-06-09 2022-09-26 Beijing SenseTime Technology Development Co., Ltd. Data enhancement policy update method, apparatus, device and storage medium
TWI781576B (en) * 2020-06-09 2022-10-21 Beijing SenseTime Technology Development Co., Ltd. Method, equipment and storage medium for updating data enhancement strategy
CN111695624A (en) * 2020-06-09 2020-09-22 Beijing SenseTime Technology Development Co., Ltd. Data enhancement strategy updating method, device, equipment and storage medium
CN111695624B (en) * 2020-06-09 2024-04-16 Beijing SenseTime Technology Development Co., Ltd. Updating method, device, equipment and storage medium of data enhancement strategy
CN112052942B (en) * 2020-09-18 2022-04-12 Alipay (Hangzhou) Information Technology Co., Ltd. Neural network model training method, device and system
CN112052942A (en) * 2020-09-18 2020-12-08 Alipay (Hangzhou) Information Technology Co., Ltd. Neural network model training method, device and system
CN114970879A (en) * 2021-02-26 2022-08-30 GE Precision Healthcare LLC Automated data enhancement in deep learning
CN113807397A (en) * 2021-08-13 2021-12-17 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method, device, equipment and storage medium of semantic representation model
CN113807397B (en) * 2021-08-13 2024-01-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method, device, equipment and storage medium for semantic representation model

Also Published As

Publication number Publication date
CN110110861B (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN110110861B (en) Method and device for determining model hyper-parameters and training model and storage medium
US20210125038A1 (en) Generating Natural Language Descriptions of Images
CN110476172B (en) Neural architecture search for convolutional neural networks
CN111406264B (en) Neural architecture search
RU2666631C2 (en) Training of DNN-student by means of output distribution
JP2017097807A (en) Learning method, learning program, and information processing device
CN110942154A (en) Data processing method, device, equipment and storage medium based on federated learning
CN110799995A (en) Data recognizer training method, data recognizer training device, program, and training method
CN109886343B (en) Image classification method and device, equipment and storage medium
CN115688913A (en) Cloud-edge collaborative personalized federated learning method, system, equipment and medium
TWI765264B (en) Device and method of handling image super-resolution
KR102129161B1 (en) Terminal device and Method for setting hyperparameter of convolutional neural network
CN111695624B (en) Updating method, device, equipment and storage medium of data enhancement strategy
WO2020162205A1 (en) Optimization device, method, and program
KR20190043720A (en) Confident Multiple Choice Learning
US20190325983A1 (en) Method and system for performing molecular design using machine learning algorithms
US11574181B2 (en) Fusion of neural networks
CN117435896A (en) Verification aggregation method without segmentation in unbalanced classification scenarios
CN116108893A (en) Self-adaptive fine tuning method, device and equipment for convolutional neural network and storage medium
US20220292342A1 (en) Communication Efficient Federated/Distributed Learning of Neural Networks
CN110765870B (en) Confidence determination method and device for OCR recognition results, and electronic device
KR102592587B1 (en) Apparatus and method for correcting speech recognition result
CN112488319A (en) Parameter adjusting method and system with self-adaptive configuration generator
KR20170068255A (en) Speech recognition method and server
US20240135184A1 (en) Constrained search: improve multi-objective NAS quality by focus on demand

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Rooms 1101-1117, 11th floor, No. 58 Beisihuan West Road, Haidian District, Beijing 100080

Applicant after: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT Co., Ltd.

Address before: Rooms 710-712, 7th floor, Building 3, No. 1 Zhongguancun East Road, Haidian District, Beijing 100084

Applicant before: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT Co., Ltd.

GR01 Patent grant