
CN110427967A - Zero-shot image classification method based on an embedded feature selection semantic autoencoder - Google Patents

Zero-shot image classification method based on an embedded feature selection semantic autoencoder - Download PDF

Info

Publication number
CN110427967A
Authority
CN
China
Prior art keywords
autoencoder
semantic
zero
embedded feature
feature selection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910566369.1A
Other languages
Chinese (zh)
Inventor
芦楠楠
周丙
张欣茹
胡小忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN201910566369.1A priority Critical patent/CN110427967A/en
Publication of CN110427967A publication Critical patent/CN110427967A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a zero-shot image classification method based on an embedded feature selection semantic autoencoder. Embedded feature selection is used to optimize the objective function of the semantic autoencoder so that the learned mapping matrix is as sparse as possible, thereby selecting the underlying feature dimensions that match the semantic attributes. In the test phase, the sparse mapping matrix maps the underlying features to semantic attributes; feature dimensions that play a negative role are automatically suppressed and those that play a positive role are enhanced, achieving matched feature selection and improving the accuracy of zero-shot image classification.

Description

Zero-Shot Image Classification Method Based on an Embedded Feature Selection Semantic Autoencoder

Technical Field

The invention belongs to the field of pattern recognition and relates in particular to a zero-shot image classification method.

Background Art

Zero-shot learning has long been a topic of intense research in pattern recognition. Because manually labeled samples are scarce, the labeled categories cannot cover all object classes, so the application scenario of the zero-shot problem is closer to reality than that of the ordinary multi-class problem. Current zero-shot image classification methods mainly include attribute-based methods, text-based methods, category-similarity-based methods, and methods that combine different intermediate layers. At present, zero-shot learning mainly relies on the attribute-based methods, which perform best.

Attribute-based zero-shot learning uses attributes as an intermediate layer to transfer knowledge from seen classes to unseen classes. Semantic attributes are vectors of descriptive properties of a category, such as "furry", "has a tail", or "four-legged"; because these attributes are shared by seen and unseen classes, they can serve as the intermediate layer for zero-shot learning. Attribute learning currently includes binary attribute learning and relative attribute learning. The difference is that a binary attribute takes the value 0 or 1, indicating the absence or presence of the attribute for a category, whereas a relative attribute takes a continuous value indicating its relative strength. The concept and setting of relative attributes are more in line with human cognition and real situations, and under the same conditions the corresponding classification accuracy is better than that of binary attributes. Therefore, for zero-shot classification, a model based on relative attributes works better; a toy illustration of the two attribute types is given below.
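For illustration only, the following snippet contrasts the two attribute encodings; the attribute names and values are made up and are not taken from the original disclosure.

```python
# Illustrative only (made-up values): a binary attribute vector records presence/absence,
# while a relative attribute vector records a continuous strength for the same attributes.
attributes     = ["furry", "has_tail", "four_legged"]
binary_attrs   = [1, 0, 1]          # 1 = "has", 0 = "does not have"
relative_attrs = [0.9, 0.1, 0.7]    # relative strength of each attribute
```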

Attribute-based zero-shot learning is currently implemented mainly through two models: the direct attribute prediction model (DAP) and the indirect attribute prediction model (IAP). In DAP the category label is predicted directly by attribute classifiers, while in IAP the category label is predicted indirectly. The main difference between the two models lies in the classifiers to be learned: IAP must learn multi-class classifiers, and its test samples can only be assigned to unseen classes, whereas DAP only needs to learn a set of attribute classifiers, and its test samples can be predicted as seen or unseen classes without restriction.

The difficulty of zero-shot learning is that the test categories do not overlap with the training categories; the classification results of traditional methods tend to be biased toward the training labels, which causes the strong-bias problem of zero-shot learning. The semantic autoencoder takes semantic attributes as the intermediate layer and the underlying image features as the input, and requires the output to reconstruct the input features, that is, the encoded data should be recoverable into the original data as far as possible under the original encoding rule; this alleviates the strong-bias problem to a certain extent. However, images contain both global and local features, and some scenes also contain noise, so not every dimension of the underlying image features contributes positively to learning a given attribute. Interfering feature dimensions reduce the accuracy of attribute learning and thus degrade zero-shot image classification performance.

Summary of the Invention

To solve the technical problems mentioned in the background above, the present invention proposes a zero-shot image classification method based on an embedded feature selection semantic autoencoder.

To achieve the above technical objective, the technical solution of the present invention is as follows:

A zero-shot image classification method based on an embedded feature selection semantic autoencoder, comprising the following steps:

(1) Fuse matched feature selection with the training of the semantic autoencoder to obtain the objective function of the embedded feature selection semantic autoencoder:

min_W (1/2)||X - W^T S||_F^2 + (λ/2)||WX - S||_F^2 + γ||W||_1

where W is the mapping matrix, the superscript T denotes transposition, X is the underlying feature of the samples, S is the relative attribute vector, λ is the weight parameter of the encoding part, and γ is the regularization parameter;

(2) Optimize the objective function obtained in step (1) with the proximal gradient descent method to obtain the optimized embedded feature selection semantic autoencoder model;

(3) In the training stage of zero-shot image classification, input the underlying features of the training samples and the corresponding relative attribute vectors into the optimized embedded feature selection semantic autoencoder model to obtain the sparse mapping matrix W;

(4) In the testing stage, input the underlying features of the test samples and the sparse mapping matrix obtained in step (3) to predict the relative attribute vectors of the test samples;

(5) Assign category labels according to the relative attribute vectors obtained in step (4).

Further, in step (2), the iterative equation of each step is as follows:

W_{k+1} = argmin_W (L/2)||W - (W_k - (1/L) f'(W_k))||_F^2 + γ||W||_1

where W_k is the value of W at the k-th iteration, f'(W_k) is the first derivative of f(W) at step k, L = -SX^T + SS^T W_0 + λW_0 XX^T - λSX^T, and W_0 is an all-ones matrix with the same dimensions as W.

Further, in step (3), cross-validation is first used to randomly select the sample categories for the training stage from the image set, with the remaining categories in the set serving as the test set; the underlying features of the training categories and the corresponding relative attribute vectors are then input into the optimized embedded feature selection semantic autoencoder for training, yielding the sparse mapping matrix.

Further, the relative attribute vectors of the training samples and of the test samples are both modeled statistically as Gaussian distributions.

Further, in step (5), the mean and variance of the relative attribute vectors of the training samples and of the test samples are computed separately, and the category labels are then assigned by maximum a posteriori probability.

Beneficial effects of the above technical solution:

Aiming at the problem that the semantic autoencoder cannot actively select underlying features for learning when performing zero-shot classification, the present invention combines embedded feature selection, that is, adds an L1-norm regularization term to the objective function, to propose the embedded feature selection semantic autoencoder, and optimizes it with the proximal gradient descent method. This improves the accuracy of relative attribute learning and of the final zero-shot classification, and raises overall performance. The invention can be used in zero-shot image classification scenarios involving domain shift and strong bias, as well as in zero-shot classification of images with noisy backgrounds.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the present invention;

Fig. 2 is a structural diagram of the embedded feature selection semantic autoencoder of the present invention.

Detailed Description of the Embodiments

The technical solution of the present invention is described in detail below with reference to the accompanying drawings.

As shown in Fig. 1, the present invention provides a zero-shot image classification method based on an embedded feature selection semantic autoencoder, with the following steps:

Step 1: Add an L1-norm regularization term to the objective function of the semantic autoencoder, integrating matched feature selection with the training of the semantic autoencoder to form the embedded feature selection semantic autoencoder. Its objective function is:

min_W (1/2)||X - W^T S||_F^2 + (λ/2)||WX - S||_F^2 + γ||W||_1

where W is the mapping matrix, the superscript T denotes transposition, X is the underlying feature of the samples, S is the relative attribute vector, λ is the weight parameter of the encoding part, and γ is the regularization parameter.

The semantic autoencoder consists of three layers: an input layer, an intermediate layer, and an output layer. The input layer is the underlying image features; the intermediate layer is the semantic attribute layer, that is, the semantic attributes serve as the intermediate layer; the output layer is obtained by decoding the semantic attribute layer. Making the output as close to the input as possible alleviates, to a certain extent, the strong-bias problem in zero-shot learning.

As shown in Fig. 2, an L1-norm regularization term is added to the structure of the original semantic autoencoder to constrain the training of the mapping matrix W, yielding a sparse mapping matrix W. The mathematical meaning of the L1-norm term is the sum of the absolute values of all elements of the matrix, so minimizing it keeps the number of non-zero elements in the matrix as small as possible, which produces the sparse mapping matrix W.
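For clarity, the following is a minimal numpy sketch of this objective, assuming Frobenius-norm decoding and encoding terms with the 1/2 factors implied by the derivative given below, and using gamma for the L1 weight γ; all names and shapes are illustrative, not part of the original disclosure.

```python
import numpy as np

def sae_l1_objective(W, X, S, lam, gamma):
    """Value of the embedded feature selection SAE objective.

    W    : (k, d) mapping matrix (attributes x feature dimensions)
    X    : (d, n) underlying image features, one column per sample
    S    : (k, n) relative attribute vectors, one column per sample
    lam  : weight of the encoding term
    gamma: weight of the L1 sparsity term
    """
    decode   = 0.5 * np.linalg.norm(X - W.T @ S, "fro") ** 2        # reconstruct features from attributes
    encode   = 0.5 * lam * np.linalg.norm(W @ X - S, "fro") ** 2    # map features to attributes
    sparsity = gamma * np.abs(W).sum()                               # L1 term that drives W toward sparsity
    return decode + encode + sparsity
```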

Step 2: Optimize the objective function obtained in Step 1 with the proximal gradient descent method.

For the first two terms of the objective function, let

f(W) = (1/2)||X - W^T S||_F^2 + (λ/2)||WX - S||_F^2.

Its first derivative f'(W) then satisfies the L-Lipschitz condition:

(f'(W_2) - f'(W_1)) ≤ L(W_2 - W_1)

The above expression also satisfies:

(f'(W_2) - f'(W_1)) = (f''(W)(W_2 - W_1)) ≤ L(W_2 - W_1)

Therefore, the value of L is the maximum of the second derivative of f(W), and this second derivative is:

f''(W) = -SX^T + SS^T W + λWXX^T - λSX^T

Since every element of the matrix W takes a value between 0 and 1, L attains its maximum when W is an all-ones matrix. Let W_0 be the all-ones matrix with the same dimensions as W in the above expression; the value of L is then:

L = -SX^T + SS^T W_0 + λW_0 XX^T - λSX^T

Around W_k, f(W) can be approximated by the second-order Taylor expansion as:

f̂(W) ≈ f(W_k) + <f'(W_k), W - W_k> + (L/2)||W - W_k||_F^2 = (L/2)||W - (W_k - (1/L) f'(W_k))||_F^2 + const

where const is a constant independent of W and <,> denotes the inner product. Clearly the minimum is attained at the following W_{k+1}:

W_{k+1} = W_k - (1/L) f'(W_k)

Therefore, carrying this idea over to the full objective with the L1 term, the iteration at each step is obtained:

W_{k+1} = argmin_W (L/2)||W - (W_k - (1/L) f'(W_k))||_F^2 + γ||W||_1

Let Z = W_k - (1/L) f'(W_k); this becomes:

W_{k+1} = argmin_W (L/2)||W - Z||_F^2 + γ||W||_1

After simplification and rearrangement, the component-wise soft-thresholding update is obtained:

W_{k+1}^i = Z^i - γ/L,  if Z^i > γ/L
W_{k+1}^i = 0,          if |Z^i| ≤ γ/L
W_{k+1}^i = Z^i + γ/L,  if Z^i < -γ/L

where the superscript i denotes the i-th component of W_{k+1} and Z.
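The update above is the standard soft-thresholding (ISTA) step. A minimal numpy sketch is given below; it assumes a scalar Lipschitz bound computed from the spectral norms of SS^T and XX^T rather than the all-ones-matrix expression used in the text, and the names, shapes, and iteration count are illustrative assumptions.

```python
import numpy as np

def fit_mapping_ista(X, S, lam, gamma, n_iter=500):
    """Proximal gradient sketch for the sparse mapping matrix W.

    X: (d, n) underlying training features, one column per sample.
    S: (k, n) relative attribute vectors, one column per sample.
    Returns W of shape (k, d).
    """
    k, d = S.shape[0], X.shape[0]
    W = np.zeros((k, d))
    # Scalar Lipschitz bound for the gradient of the smooth part (an assumption;
    # the text instead evaluates its expression at the all-ones matrix W0).
    L = np.linalg.norm(S @ S.T, 2) + lam * np.linalg.norm(X @ X.T, 2)

    def grad(W):
        # f'(W) = -S X^T + S S^T W + lam * W X X^T - lam * S X^T
        return -S @ X.T + S @ S.T @ W + lam * W @ (X @ X.T) - lam * S @ X.T

    for _ in range(n_iter):
        Z = W - grad(W) / L                                      # gradient step on f(W)
        W = np.sign(Z) * np.maximum(np.abs(Z) - gamma / L, 0.0)  # soft-thresholding: prox of gamma*||.||_1
    return W
```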

Step 3: In the training stage of zero-shot image classification, input the underlying features of the training samples and the corresponding relative attribute vectors into the model formed by the optimization of Step 2 to obtain the sparse mapping matrix W.

Cross-validation is first used to randomly select the sample categories for the training stage from the image set, with the remaining categories in the set serving as the test set; the underlying features of the training categories and the corresponding relative attribute vectors are then input into the optimized embedded feature selection semantic autoencoder for training, yielding the sparse mapping matrix. A toy class-level split is sketched below.
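The following sketch shows one random class-level split consistent with this description; the class count, sample count, feature dimensionality, and the 8/2 split are assumptions, and the fold count of the cross-validation is not specified in the original. The resulting X_train and the corresponding attribute vectors would be fed to the solver sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)

labels   = np.repeat(np.arange(10), 50)           # toy data: 10 classes, 50 samples each
features = rng.normal(size=(labels.size, 512))    # toy 512-dimensional underlying features

classes = np.unique(labels)
seen    = rng.choice(classes, size=8, replace=False)  # randomly chosen training (seen) classes
unseen  = np.setdiff1d(classes, seen)                 # remaining classes form the test (unseen) set

train_mask = np.isin(labels, seen)
X_train, y_train = features[train_mask].T, labels[train_mask]    # (d, n) layout for the solver
X_test,  y_test  = features[~train_mask].T, labels[~train_mask]
```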

Step 4: In the test stage, input the underlying features of the test samples and the sparse mapping matrix obtained in Step 3 to predict the relative attribute vectors of the test samples.
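A minimal sketch of this prediction step, using toy stand-ins for the learned sparse mapping matrix and the test features; the shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W      = rng.normal(size=(6, 512))    # stand-in for the learned sparse mapping matrix (k=6 attributes, d=512)
X_test = rng.normal(size=(512, 20))   # stand-in for the underlying features of 20 test samples
S_test = W @ X_test                   # (6, 20): each column is a predicted relative attribute vector
```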

Step 5: Assign category labels according to the relative attribute vectors obtained in Step 4.

The relative attribute vectors known a priori for each label are modeled statistically as a Gaussian distribution, and their mean and variance are computed; the predicted attribute vectors of the test samples are likewise modeled as a Gaussian distribution, and their mean and variance are computed; the category labels are then assigned by maximum a posteriori probability from the obtained means and variances.
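A minimal sketch of this assignment, assuming each class is described by per-attribute Gaussian means and variances (for unseen classes these would come from the a priori relative attribute annotations), independent attribute dimensions, and a uniform class prior; all names are illustrative.

```python
import numpy as np

def map_classify(S_test, class_attr_means, class_attr_vars):
    """Assign labels by maximum a posteriori probability under per-class Gaussians.

    S_test          : (k, m) predicted attribute vectors for m test samples
    class_attr_means: dict {label: (k,) mean attribute vector}
    class_attr_vars : dict {label: (k,) attribute variances}
    """
    labels, scores = list(class_attr_means), []
    for c in labels:
        mu  = class_attr_means[c]
        var = class_attr_vars[c] + 1e-8                      # guard against zero variance
        # log of the Gaussian likelihood, summed over attribute dimensions
        log_lik = -0.5 * (((S_test - mu[:, None]) ** 2) / var[:, None]
                          + np.log(2 * np.pi * var[:, None])).sum(axis=0)
        scores.append(log_lik)
    return np.array(labels)[np.argmax(np.stack(scores), axis=0)]
```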

The embodiment only illustrates the technical idea of the present invention and does not limit its scope of protection; any modification made on the basis of the technical solution in accordance with the technical idea proposed by the present invention falls within the scope of protection of the present invention.

Claims (5)

1. A zero-shot image classification method based on an embedded feature selection semantic autoencoder, characterized by comprising the following steps:

(1) fusing matched feature selection with the training of the semantic autoencoder to obtain the objective function of the embedded feature selection semantic autoencoder:

min_W (1/2)||X - W^T S||_F^2 + (λ/2)||WX - S||_F^2 + γ||W||_1

where W is the mapping matrix, the superscript T denotes transposition, X is the underlying feature of the samples, S is the relative attribute vector, λ is the weight parameter of the encoding part, and γ is the regularization parameter;

(2) optimizing the objective function obtained in step (1) with the proximal gradient descent method to obtain the optimized embedded feature selection semantic autoencoder model;

(3) in the training stage of zero-shot image classification, inputting the underlying features of the training samples and the corresponding relative attribute vectors into the optimized embedded feature selection semantic autoencoder model to obtain the sparse mapping matrix W;

(4) in the testing stage, inputting the underlying features of the test samples and the sparse mapping matrix obtained in step (3) to predict the relative attribute vectors of the test samples;

(5) assigning category labels according to the relative attribute vectors obtained in step (4).

2. The zero-shot image classification method based on an embedded feature selection semantic autoencoder according to claim 1, characterized in that, in step (2), the iterative equation of each step is as follows:

W_{k+1} = argmin_W (L/2)||W - (W_k - (1/L) f'(W_k))||_F^2 + γ||W||_1

where W_k is the value of W at the k-th iteration, f'(W_k) is the first derivative of f(W) at step k, L = -SX^T + SS^T W_0 + λW_0 XX^T - λSX^T, and W_0 is an all-ones matrix with the same dimensions as W.

3. The zero-shot image classification method based on an embedded feature selection semantic autoencoder according to claim 1, characterized in that, in step (3), cross-validation is first used to randomly select the sample categories for the training stage from the image set, with the remaining categories in the set serving as the test set; the underlying features of the training categories and the corresponding relative attribute vectors are then input into the optimized embedded feature selection semantic autoencoder for training, yielding the sparse mapping matrix.

4. The zero-shot image classification method based on an embedded feature selection semantic autoencoder according to claim 1, characterized in that the relative attribute vectors of the training samples and of the test samples are both modeled statistically as Gaussian distributions.

5. The zero-shot image classification method based on an embedded feature selection semantic autoencoder according to claim 4, characterized in that, in step (5), the mean and variance of the relative attribute vectors of the training samples and of the test samples are computed separately, and the category labels are then assigned by maximum a posteriori probability.
CN201910566369.1A 2019-06-27 2019-06-27 Zero-shot image classification method based on an embedded feature selection semantic autoencoder Pending CN110427967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910566369.1A CN110427967A (en) 2019-06-27 2019-06-27 Zero-shot image classification method based on an embedded feature selection semantic autoencoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910566369.1A CN110427967A (en) 2019-06-27 2019-06-27 Zero-shot image classification method based on an embedded feature selection semantic autoencoder

Publications (1)

Publication Number Publication Date
CN110427967A true CN110427967A (en) 2019-11-08

Family

ID=68409698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566369.1A Pending CN110427967A (en) 2019-06-27 2019-06-27 Zero-shot image classification method based on an embedded feature selection semantic autoencoder

Country Status (1)

Country Link
CN (1) CN110427967A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914872A (en) * 2020-06-04 2020-11-10 西安理工大学 A Zero-Shot Image Classification Method Fusion of Labeling and Semantic Autocoding
CN113221814A (en) * 2021-05-26 2021-08-06 华瑞新智科技(北京)有限公司 Road traffic sign identification method, equipment and storage medium
CN114005005A (en) * 2021-12-30 2022-02-01 深圳佑驾创新科技有限公司 Double-batch standardized zero-instance image classification method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310862A1 (en) * 2014-04-24 2015-10-29 Microsoft Corporation Deep learning for semantic parsing including semantic utterance classification
CN106203472A (en) * 2016-06-27 2016-12-07 中国矿业大学 A kind of zero sample image sorting technique based on the direct forecast model of mixed attributes
US20170127016A1 (en) * 2015-10-29 2017-05-04 Baidu Usa Llc Systems and methods for video paragraph captioning using hierarchical recurrent neural networks
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN108564121A (en) * 2018-04-09 2018-09-21 南京邮电大学 A kind of unknown classification image tag prediction technique based on self-encoding encoder
CN108921226A (en) * 2018-07-11 2018-11-30 广东工业大学 A kind of zero sample classification method based on low-rank representation and manifold regularization
CN109492662A (en) * 2018-09-27 2019-03-19 天津大学 A kind of zero sample classification method based on confrontation self-encoding encoder model
CN109829299A (en) * 2018-11-29 2019-05-31 电子科技大学 A kind of unknown attack recognition methods based on depth self-encoding encoder

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310862A1 (en) * 2014-04-24 2015-10-29 Microsoft Corporation Deep learning for semantic parsing including semantic utterance classification
US20170127016A1 (en) * 2015-10-29 2017-05-04 Baidu Usa Llc Systems and methods for video paragraph captioning using hierarchical recurrent neural networks
CN106203472A (en) * 2016-06-27 2016-12-07 中国矿业大学 A kind of zero sample image sorting technique based on the direct forecast model of mixed attributes
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN108564121A (en) * 2018-04-09 2018-09-21 南京邮电大学 A kind of unknown classification image tag prediction technique based on self-encoding encoder
CN108921226A (en) * 2018-07-11 2018-11-30 广东工业大学 A kind of zero sample classification method based on low-rank representation and manifold regularization
CN109492662A (en) * 2018-09-27 2019-03-19 天津大学 A kind of zero sample classification method based on confrontation self-encoding encoder model
CN109829299A (en) * 2018-11-29 2019-05-31 电子科技大学 A kind of unknown attack recognition methods based on depth self-encoding encoder

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
CLVSIT: "特征选择-嵌入式选择", 《CSDN》 *
ELYOR KODIROV ET AL.: "Semantic Autoencoder for Zero-Shot Learning", 《ARXIV》 *
ELYOR KODIROV ET AL.: "unsupervised domain adaptation for zero-shot learning", 《ICCV"15》 *
TEN_YN: "近端梯度下降算法(Proximal Gradient Algorithm)", 《HTTPS://BLOG.CSDN.NET/QQ_38290475/ARTICLE/DETAILS/81052206》 *
YANG LIU ET AL.: "Zero Shot Learning via Low-rank Embedded Semantic AutoEncoder", 《PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE》 *
冯兴东: "《分布式统计计算》", 30 April 2018, 上海财经大学出版社 *
巩萍等: "基于属性关系图正则化特征选择的零样本分类", 《中国矿业大学学报》 *
褚宝增等: "《现代数学地质》", 31 August 2014, 中国科学技术出版社 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914872A (en) * 2020-06-04 2020-11-10 西安理工大学 A Zero-Shot Image Classification Method Fusion of Labeling and Semantic Autocoding
CN111914872B (en) * 2020-06-04 2024-02-02 西安理工大学 Zero sample image classification method with label and semantic self-coding fused
CN113221814A (en) * 2021-05-26 2021-08-06 华瑞新智科技(北京)有限公司 Road traffic sign identification method, equipment and storage medium
CN114005005A (en) * 2021-12-30 2022-02-01 深圳佑驾创新科技有限公司 Double-batch standardized zero-instance image classification method
CN114005005B (en) * 2021-12-30 2022-03-22 深圳佑驾创新科技有限公司 Double-batch standardized zero-instance image classification method

Similar Documents

Publication Publication Date Title
She et al. Text classification based on hybrid CNN-LSTM hybrid model
Grcić et al. Densely connected normalizing flows
CN107169035B (en) A Text Classification Method Hybrid Long Short-Term Memory Network and Convolutional Neural Network
CN107330446B (en) An optimization method of deep convolutional neural network for image classification
CN107885853A (en) A kind of combined type file classification method based on deep learning
CN109271522A (en) Comment sensibility classification method and system based on depth mixed model transfer learning
CN105184298A (en) Image classification method through fast and locality-constrained low-rank coding process
CN112488241B (en) Zero sample picture identification method based on multi-granularity fusion network
CN106778853A (en) Unbalanced data sorting technique based on weight cluster and sub- sampling
CN107679526A (en) A kind of micro- expression recognition method of face
CN110516718A (en) Zero-shot learning method based on deep embedding space
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN108051660A (en) A kind of transformer fault combined diagnosis method for establishing model and diagnostic method
CN110427967A (en) The zero sample image classification method based on embedded feature selecting semanteme self-encoding encoder
CN110414665A (en) A network representation learning method based on deep neural network
CN114925205B (en) GCN-GRU text classification method based on contrastive learning
Cai et al. Scene-adaptive vehicle detection algorithm based on a composite deep structure
CN105334504A (en) Radar target identification method based on large-boundary nonlinear discrimination projection model
CN111242196A (en) Differential privacy protection method for interpretable deep learning
WO2023088174A1 (en) Target detection method and apparatus
Blot et al. Shade: Information-based regularization for deep learning
CN111859936A (en) A cross-domain filing-oriented legal document professional jurisdiction identification method based on deep hybrid network
CN108985378B (en) A Domain Adaptation Method Based on Hybrid Cross Deep Network
Chen et al. Military image scene recognition based on CNN and semantic information
Okokpujie et al. Predictive modeling of trait-aging invariant face recognition system using machine learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination