CN111612084A - Optimization method of deep non-negative matrix factorization network with classifier - Google Patents
Optimization method of deep non-negative matrix factorization network with classifier
- Publication number: CN111612084A
- Application number: CN202010456563.7A
- Authority: CN (China)
- Prior art keywords: nmf, layer, network, deep, classifier
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/2133: Feature extraction by transforming the feature space (e.g. subspace methods) based on naturality criteria, e.g. with non-negative factorisation or negative correlation
- G06F18/2415: Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N7/01: Probabilistic graphical models, e.g. probabilistic networks
Abstract
The invention relates to an optimization method for a deep non-negative matrix factorization (NMF) network with a classifier, and belongs to the technical field of artificial intelligence. The method comprises model construction and parameter optimization. For model construction, multiple NMF layers and a classification layer are cascaded to form a deep network: the decomposition result of each NMF layer serves as the input of the next NMF layer, and different NMF layers are connected by a mapping function. For parameter optimization, the deep NMF network is first pre-trained layer by layer in an unsupervised manner based on multiplicative iteration rules; supervised global optimization then jointly tunes the weight parameters of each NMF layer and the Softmax classification layer based on the BP algorithm. Test data are analyzed with the trained and optimized deep NMF network to obtain the classification output. The invention is suitable for applications involving classification and identification tasks, such as condition monitoring and diagnosis.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence, and relates to an optimization method of a deep non-negative matrix factorization network with a classifier.
Background
Non-negative matrix factorization (NMF) is a matrix factorization approach that requires all factorized components to be non-negative (a purely additive description) while achieving dimensionality reduction and sparse feature representation; it has been successfully applied in image processing, computer vision, pattern recognition, biomedicine, signal processing, and other fields. However, the feature expression capability of a traditional single-layer network is limited. To further improve classification or regression performance, deep learning has become an important research direction in machine learning in recent years: by stacking multiple abstraction layers with self-learning capability, a deep network extracts input data step by step and obtains higher-level feature representations, and it has achieved remarkable results in many classification and recognition tasks.
Against this background, combining the non-negative matrix factorization idea with deep learning is expected to retain the advantages of both: constructing a deep non-negative matrix factorization network improves on the feature extraction performance of single-layer NMF, while the purely additive NMF description gives the deep network an intuitive and interpretable hierarchical feature learning process. Deep NMF networks have been studied in the literature (Guo Z, Zhang S. Sparse deep nonnegative matrix factorization [J]. Big Data Mining and Analytics, 2020, 3(1): 13-28. Song H A, Kim B K, Xuan T L, et al. Hierarchical feature extraction by multi-layer non-negative matrix factorization network for classification task [J]. Neurocomputing, 2015, 165: 63-74. Cichocki A, Zdunek R. Multilayer nonnegative matrix factorisation [J]. Electronics Letters, 2006, 42(16): 947-948.). However, existing deep NMF networks train and optimize only the NMF feature extraction layers and do not realize unified global optimization of the NMF feature extraction layers and the classification layer as one model. To address this problem, the invention provides a multi-layer non-negative matrix factorization network with a deep learning structure and a classifier, and designs a model parameter optimization method combining unsupervised layer-by-layer pre-training with supervised global optimization.
Disclosure of Invention
In view of the above, the present invention provides an optimization method for a deep non-negative matrix factorization network with a classifier. By analyzing the basic structures of non-negative matrix factorization and deep networks, a cascade strategy is adopted to construct a multi-layer non-negative matrix factorization network with a deep learning structure and a classifier, and a method combining unsupervised layer-by-layer pre-training with supervised global optimization realizes unified global optimization of the deep model's feature extraction layers and classification layer.
In order to achieve the above object, the present invention provides the following technical solution:
A method for optimizing a deep non-negative matrix factorization network with a classifier comprises the following steps:
S1: inputting raw data, where the training set is {Xk, Gk}, k = 1, 2, ..., n, n being the number of training samples, and preprocessing the model input data, with the preprocessed data denoted X(0);
S2: setting the number of NMF layers in the deep network to L and the low-dimensional feature space dimensions of the NMF layers to r(1), r(2), ..., r(L), and cascading the NMF layers and the classification layer to construct a deep NMF network with a classifier, in which the decomposition result of each NMF layer serves as the input of the next NMF layer and different NMF layers are connected by a mapping function;
S3: carrying out unsupervised pre-training of each NMF layer of the deep NMF network constructed in step S2 based on multiplicative iteration rules;
S4: carrying out supervised global optimization of the connection weight parameters of each NMF layer and the Softmax classification layer of the pre-trained deep NMF network obtained in step S3 based on the BP algorithm;
S5: analyzing the input test data samples with the trained and optimized deep NMF network obtained in step S4 to obtain the corresponding classification output results.
Optionally, in step S1, a short-time Fourier transform (STFT) time-frequency analysis method is used to preprocess the input data by computing its time-frequency amplitude spectrum.
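For illustration, a minimal sketch of this preprocessing step, assuming Python with NumPy and SciPy; the sampling rate and window length are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.signal import stft

def tf_amplitude_spectrum(signal, fs=1000.0, nperseg=256):
    """Short-time Fourier transform time-frequency amplitude spectrum.

    The magnitudes |STFT| are non-negative, which makes the result a
    valid input matrix for the first NMF layer.
    """
    _, _, Zxx = stft(signal, fs=fs, nperseg=nperseg)
    return np.abs(Zxx)  # shape: (frequency bins, time frames)
```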
Optionally, in step S2, the deep NMF network with a classifier is constructed using Softmax as the classification-layer function.
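Along the same lines, a sketch of the Softmax classification-layer function (a standard formulation; the weight-matrix name theta is a hypothetical choice, not notation from the patent):

```python
import numpy as np

def softmax_layer(theta, X):
    """Softmax classification layer.

    theta: (K, r) class weight matrix; X: (r, n) feature matrix.
    Returns a (K, n) matrix of class probabilities, one column per sample.
    """
    Z = theta @ X
    Z = Z - Z.max(axis=0, keepdims=True)   # subtract column max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)
```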
Optionally, in step S3, the unsupervised pre-training of the deep NMF network based on multiplicative iteration rules proceeds as follows:
S31: inputting the data X(i-1) into the i-th NMF layer;
S32: setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the basis vector matrix W(i) and the low-dimensional feature matrix H(i); letting the input be a data set of n N-dimensional vectors, X(i-1) ∈ R^(N×n), the basis vector matrix is W(i) ∈ R^(N×r) and the low-dimensional feature matrix is H(i) ∈ R^(r×n), where r is the low-dimensional feature space dimension, generally much smaller than N and n, satisfying (N + n)r < Nn (for example, N = 1000, n = 500, and r = 50 give (N + n)r = 75,000 < Nn = 500,000);
S33: updating the basis vector matrix W(i) and the low-dimensional feature matrix H(i); the iterative update rules take the standard multiplicative form for the squared-error objective:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
W(i) ← W(i) ∘ [X(i-1) (H(i))^T] / [W(i) H(i) (H(i))^T]
where ∘ denotes the element-wise product and the division is element-wise;
S34: calculating the objective function value C of the NMF layer in the pre-training stage, where C is defined as the reconstruction error C = ||X(i-1) - W(i)H(i)||^2;
S35: comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the basis vector matrix W(i) and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps S33 to S35;
S36: mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
S37: repeating steps S31 to S36 until i > L, completing the unsupervised layer-by-layer pre-training of each NMF layer.
Optionally, in step S4, the supervised global optimization of the deep NMF network based on the BP algorithm proceeds as follows:
S41: inputting the labeled data X(i-1) into the i-th NMF layer;
S42: setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the low-dimensional feature matrix H(i);
S43: fixing the basis vector matrix W(i) and iteratively updating the low-dimensional feature matrix H(i) with the multiplicative rule:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
S44: calculating the objective function value C of the NMF layer in the supervised global optimization stage, where C is defined, as in the pre-training stage, as C = ||X(i-1) - W(i)H(i)||^2;
S45: comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps S43 to S45;
S46: computing the i-th NMF layer cost function C(i) = C + α Σ f(wij), where Σ f(wij) is the weight constraint term, α is the coefficient balancing the weight constraint term, and f(wij) is a penalty function applied to each weight wij;
S47: mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
S48: repeating steps S41 to S47 until i > L;
S49: inputting the output X(L) of the L-th NMF layer into the Softmax classifier and computing the classification-layer cost function value J, the Softmax misclassification (cross-entropy) cost, where K is the number of classes and yr is the sample class label;
S410: computing the total cost function value of the deep NMF network with the classifier over the parameters WDN, which comprise the weight parameters of each NMF layer and the Softmax classification layer;
S411: regarding all layers of the deep NMF network with the classifier as one model and, based on the gradient descent algorithm, minimizing the overall cost function value of the network through multiple iterations: the output of each layer is computed, the error of each layer is back-propagated, the corresponding parameters are corrected according to the errors, and the weight parameters of each NMF layer and the Softmax classification layer are thereby optimized.
The invention has the following beneficial effects: the NMF algorithm converges quickly, the left and right non-negative factor matrices require little storage, and it reduces the dimension of a high-dimensional data matrix, with the low-dimensional factors obtained by decomposition exhibiting natural sparsity and robustness; it is therefore suitable for processing large-scale data. The deep NMF network improves on the feature extraction performance of single-layer NMF and provides an intuitive and interpretable hierarchical feature learning process.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a deep non-negative matrix factorization network with classifiers according to the present invention;
FIG. 2 is a flowchart of classification and identification based on deep non-negative matrix factorization network.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for the purpose of illustrating the invention only and are not intended to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Referring to fig. 1 and 2, an optimization method for a deep NMF network with a classifier performs unsupervised layer-by-layer pre-training of the deep NMF network based on multiplicative iteration rules, specifically comprising the following steps:
1) inputting the data X(i-1) into the i-th NMF layer;
2) setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the basis vector matrix W(i) and the low-dimensional feature matrix H(i); for a data set of n N-dimensional vectors X(i-1) ∈ R^(N×n), the basis vector matrix is W(i) ∈ R^(N×r) and the low-dimensional feature matrix is H(i) ∈ R^(r×n), where r is the low-dimensional feature space dimension, generally much smaller than N and n, satisfying (N + n)r < Nn;
3) updating the basis vector matrix W(i) and the low-dimensional feature matrix H(i) with the standard multiplicative update rules:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
W(i) ← W(i) ∘ [X(i-1) (H(i))^T] / [W(i) H(i) (H(i))^T]
4) calculating the objective function value C of the NMF layer in the pre-training stage, where C = ||X(i-1) - W(i)H(i)||^2;
5) comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the basis vector matrix W(i) and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps 3) to 5);
6) mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
7) repeating steps 1) to 6) until i > L, completing the unsupervised layer-by-layer pre-training of each NMF layer.
The supervised global optimization of the deep NMF network based on the BP algorithm specifically comprises the following steps:
1) inputting the labeled data X(i-1) into the i-th NMF layer;
2) setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the low-dimensional feature matrix H(i);
3) fixing the basis vector matrix W(i) and iteratively updating the low-dimensional feature matrix H(i) with the multiplicative rule:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
4) calculating the objective function value C of the NMF layer in the supervised global optimization stage, where C = ||X(i-1) - W(i)H(i)||^2;
5) comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps 3) to 5);
6) computing the i-th NMF layer cost function C(i) = C + α Σ f(wij), where Σ f(wij) is the weight constraint term, α is the coefficient balancing the weight constraint term, and f(wij) is a penalty function applied to each weight wij;
7) mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
8) repeating steps 1) to 7) until i > L;
9) inputting the output X(L) of the L-th NMF layer into the Softmax classifier and computing the classification-layer cost function value J, the Softmax misclassification cost, where K is the number of classes and yr is the sample class label;
10) computing the total cost function value of the deep NMF network with the classifier over the parameters WDN, which comprise the weight parameters of each NMF layer and the Softmax classification layer;
11) regarding all layers of the deep NMF network with the classifier as one model and, based on the gradient descent algorithm, minimizing the overall cost function value of the network through multiple iterations: the output of each layer is computed, the error of each layer is back-propagated, the corresponding parameters are corrected according to the errors, and the weight parameters of each NMF layer and the Softmax classification layer are thereby further optimized.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (5)
1. An optimization method for a deep non-negative matrix factorization network with a classifier, the deep non-negative matrix factorization network with a classifier comprising an input layer, NMF layer 1, NMF layer 2, ..., NMF layer L, and a classification layer, wherein different NMF layers are connected by a mapping function, characterized in that the method comprises the following steps:
S1: inputting raw data, where the training set is {Xk, Gk}, k = 1, 2, ..., n, n being the number of training samples, preprocessing the model input data, and recording the preprocessed data as X(0);
S2: setting the number of NMF layers in the deep network to L and the low-dimensional feature space dimensions of the NMF layers to r(1), r(2), ..., r(L), and cascading the NMF layers and the classification layer to construct a deep NMF network with a classifier, in which the decomposition result of each NMF layer serves as the input of the next NMF layer and different NMF layers are connected by a mapping function;
S3: carrying out unsupervised pre-training of each NMF layer of the deep NMF network constructed in step S2 based on multiplicative iteration rules;
S4: carrying out supervised global optimization of the connection weight parameters of each NMF layer and the Softmax classification layer of the pre-trained deep NMF network obtained in step S3 based on the BP algorithm;
S5: analyzing the input test data samples with the trained and optimized deep NMF network obtained in step S4 to obtain the corresponding classification output results.
2. The optimization method of the deep non-negative matrix factorization network with the classifier according to claim 1, wherein: in step S1, a short-time Fourier transform time-frequency analysis method is used to preprocess the input data by computing its time-frequency amplitude spectrum.
3. The optimization method of the deep non-negative matrix factorization network with the classifier according to claim 1, wherein: in step S2, the deep NMF network with a classifier is constructed using Softmax as the classification-layer function.
4. The optimization method of the deep non-negative matrix factorization network with the classifier according to claim 1, wherein in step S3, the unsupervised pre-training of the deep NMF network based on multiplicative iteration rules comprises the following steps:
S31: inputting the data X(i-1) into the i-th NMF layer;
S32: setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the basis vector matrix W(i) and the low-dimensional feature matrix H(i); for a data set of n N-dimensional vectors X(i-1) ∈ R^(N×n), the basis vector matrix is W(i) ∈ R^(N×r) and the low-dimensional feature matrix is H(i) ∈ R^(r×n), where r is the low-dimensional feature space dimension, generally much smaller than N and n, satisfying (N + n)r < Nn;
S33: updating the basis vector matrix W(i) and the low-dimensional feature matrix H(i) with the multiplicative iterative update rules:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
W(i) ← W(i) ∘ [X(i-1) (H(i))^T] / [W(i) H(i) (H(i))^T]
S34: calculating the objective function value C of the NMF layer in the pre-training stage, where C = ||X(i-1) - W(i)H(i)||^2;
S35: comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the basis vector matrix W(i) and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps S33 to S35;
S36: mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
S37: repeating steps S31 to S36 until i > L, completing the unsupervised layer-by-layer pre-training of each NMF layer.
5. The optimization method of the deep non-negative matrix factorization network with the classifier according to claim 1, wherein in step S4, the supervised global optimization of the deep NMF network based on the BP algorithm comprises the following steps:
S41: inputting the labeled data X(i-1) into the i-th NMF layer;
S42: setting the algorithm termination threshold e and the maximum iteration number tmax, and initializing the low-dimensional feature matrix H(i);
S43: fixing the basis vector matrix W(i) and iteratively updating the low-dimensional feature matrix H(i) with the multiplicative rule:
H(i) ← H(i) ∘ [(W(i))^T X(i-1)] / [(W(i))^T W(i) H(i)]
S44: calculating the objective function value C of the NMF layer in the supervised global optimization stage, where C = ||X(i-1) - W(i)H(i)||^2;
S45: comparing the objective function values C(t) and C(t+1); if ||C(t+1) - C(t)|| < e holds or the maximum iteration number tmax is reached, the algorithm terminates and the low-dimensional features H(i) of the i-th NMF layer are obtained; otherwise, looping over steps S43 to S45;
S46: computing the i-th NMF layer cost function C(i) = C + α Σ f(wij), where Σ f(wij) is the weight constraint term, α is the coefficient balancing the weight constraint term, and f(wij) is a penalty function applied to each weight wij;
S47: mapping the low-dimensional features H(i) to obtain the input data X(i) of the (i+1)-th NMF layer, the nonlinear mapping based on the Sigmoid function being:
f(x)=1/(1+exp(-x))
S48: repeating steps S41 to S47 until i > L;
S49: inputting the output X(L) of the L-th NMF layer into the Softmax classifier and computing the classification-layer cost function value J, the Softmax misclassification cost, where K is the number of classes and yr is the sample class label;
S410: computing the total cost function value of the deep NMF network with the classifier over the parameters WDN, which comprise the weight parameters of each NMF layer and the Softmax classification layer;
S411: regarding all layers of the deep NMF network with the classifier as one model and, based on the gradient descent algorithm, minimizing the overall cost function value of the network through multiple iterations; computing the output of each layer, back-propagating the error of each layer, correcting the corresponding parameters according to the errors, and optimizing the weight parameters of each NMF layer and the Softmax classification layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456563.7A CN111612084A (en) | 2020-05-26 | 2020-05-26 | Optimization method of deep non-negative matrix factorization network with classifier |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456563.7A CN111612084A (en) | 2020-05-26 | 2020-05-26 | Optimization method of deep non-negative matrix factorization network with classifier |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111612084A (en) | 2020-09-01
Family
ID=72202352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010456563.7A Pending CN111612084A (en) | 2020-05-26 | 2020-05-26 | Optimization method of deep non-negative matrix factorization network with classifier |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612084A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118096237A (en) * | 2024-03-08 | 2024-05-28 | 北京嘉华铭品牌策划有限公司广东分公司 | Deep learning driven customer behavior prediction model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200901 |