CN117290767A - Transformer fault detection method system and equipment based on similarity and contrast learning - Google Patents
- Publication number
- CN117290767A (application CN202311139995.5A)
- Authority
- CN
- China
- Prior art keywords
- fault
- transformer
- similarity
- node
- transformer fault
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01R31/00 — Arrangements for testing electric properties; arrangements for locating electric faults; arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/24147 — Distances to closest patterns, e.g. nearest neighbour classification
- G06N3/045 — Neural networks; combinations of networks
- G06N3/088 — Non-supervised learning, e.g. competitive learning
Abstract
The invention relates to the field of transformer fault detection, and in particular to a transformer fault detection method, system and equipment based on similarity and contrastive learning. The method comprises the following steps: constructing a graph network from transformer fault samples; extracting multi-view fused features of the transformer samples, i.e. learning node representations of the similar K-nearest-neighbor graph through a low-pass filter and node representations of the dissimilar K-nearest-neighbor graph through a high-pass filter, and fusing them to obtain the final node representations; optimizing the transformer sample features; and classifying the transformer faults by passing the learned node representations through a classifier. The method comprehensively considers the similarity between fault data of the same transformer fault category, the differences between fault data of different categories, and the inconsistent patterns among transformer fault categories; it obtains the representation of transformer fault features through a graph neural network and ranking contrastive learning, and aims to improve transformer fault diagnosis.
Description
Technical Field
The invention relates to the field of transformer fault detection, and in particular to a transformer fault detection method, system and equipment based on similarity and contrastive learning.
Background
Transformers play a critical role in power transmission and distribution, and any fault may seriously affect the stability and reliability of the grid; transformer fault diagnosis is therefore an important aspect of power system maintenance and management. Transformer faults can cause safety accidents such as short circuits and fires in the power system, posing a serious threat to personnel and equipment. The oil quality test is one of the most common tests for evaluating transformer condition: it identifies faults by analyzing the various gases dissolved in the transformer oil, i.e., dissolved gas analysis. There are two main approaches to analyzing dissolved gases: interpretation-based methods and artificial-intelligence-based methods. Interpretation-based transformer fault diagnosis can be further subdivided into expert-system-based and rule-system-based methods.
Expert-system-based transformer fault diagnosis is a method built on expert knowledge and rules. By collecting expert domain knowledge and encoding it into a series of rules or decision trees, transformer faults are diagnosed from observed symptoms or measurement data. Expert systems generally have good interpretability and can provide detailed diagnostic procedures and reasoning grounds. However, the accuracy of an expert system is limited by the quality and applicability of its rules, and it cannot accommodate new or complex failure modes. At the same time, expert systems are limited by the capabilities and expertise of the experts who build them, making such systems difficult to maintain and update.
Rule-system-based transformer fault diagnosis diagnoses transformer faults from observed symptoms or measurement data. Rule systems can process incomplete or noisy data and are easy to implement and interpret; owing to this simplicity, they are widely used in transformer fault diagnosis. However, rule systems rely on predefined rules or decision trees, which cannot capture all possible failure modes.
In recent years, obtaining dissolved-gas data from transformer oil has required little manpower and material resources while yielding accurate measurements. Most research has therefore focused on developing machine-learning models based on dissolved gas analysis data, with the aim of accurately classifying transformer fault type and severity. Among these, a particularly effective direction is deep learning, for example convolutional neural networks and recurrent neural networks. Other studies explore hybrid artificial intelligence systems that combine several AI techniques to improve diagnostic accuracy, for example hybrid systems combining fuzzy logic and artificial neural networks for diagnosis from dissolved gas analysis data. Another line of research focuses on integrating dissolved gas analysis data with other diagnostic data (e.g., vibration analysis and oil quality analysis) using data fusion techniques. In this way, complementary information from multiple data sets can be exploited, improving the accuracy and reliability of fault diagnosis.
However, despite these breakthroughs in artificial-intelligence-based transformer fault detection from dissolved gases, one central problem remains essentially unsolved: when the similarity between transformer fault data is high, the probability of misdiagnosis is high. How to better learn the characteristics of each transformer sample and the differences between fault categories, so as to reduce the number of misdiagnoses in transformer fault diagnosis, has become a critical problem to be solved.
Disclosure of Invention
In view of the above, the invention provides a method, a system and a device for transformer fault detection based on similarity and contrastive learning, which comprehensively consider the similarity between fault data of the same transformer fault category, the differences between fault data of different categories, and the inconsistent patterns among transformer fault categories; the representations of transformer fault nodes are learned through a graph neural network, with the aim of improving transformer fault diagnosis.
In order to achieve the above purpose, the specific technical scheme is as follows:
the first aspect provides a transformer fault detection method based on similarity and contrast learning, comprising: calculating the similarity of soluble data in the transformer fault sample oil by adopting cosine similarity, and determining the similarity among the transformer fault samples; based on the similarity value, connecting each transformer fault sample with K most similar transformer fault samples, and constructing a similar KNN diagram; simultaneously connecting each transformer fault sample with K least similar transformer fault samples to construct a dissimilar KNN diagram; learning a transformer fault node representation of a similar KNN diagram by constructing a low-pass filter for transformer fault detection; by constructing a high-pass filter for transformer fault detection, learning transformer fault node representations of dissimilar KNN graphs; the method comprises the steps of obtaining differences among fault categories of a transformer by using fault node representation of the transformer, realizing inconsistent modes among different categories of faults of the transformer by using a sequencing comparison learning method, and optimizing the fault node representation of the transformer; and representing the optimized transformer fault node, determining the transformer fault type by using a multi-layer perception classifier, and taking the fault type with the maximum probability value as the sample fault type.
With reference to the first aspect, in some possible embodiments, the cosine similarity of the dissolved-gas data in the transformer fault samples is calculated as:

$sm_{i,j} = \dfrac{x_i \cdot x_j}{\lVert x_i \rVert \, \lVert x_j \rVert}$

where $sm_{i,j}$ denotes the cosine similarity value of the i-th and j-th transformer fault nodes, $x_i, x_j \in \mathbb{R}^d$ denote the feature vectors of the i-th and j-th transformer fault nodes respectively, and d denotes the feature dimension of a fault node.
With reference to the first aspect, in some possible embodiments, the construction steps of the similar and dissimilar KNN graphs comprise:
sorting the similarity values between transformer fault samples in descending order;
selecting, for each transformer fault sample, the first K sorted samples as its neighbors to construct the similar KNN graph;
connecting each transformer fault sample with the last K sorted samples to construct the dissimilar KNN graph.
With reference to the first aspect, in some possible implementations, the transformer fault node representation of the similar KNN graph is learned as:

$H^{(l)} = \hat{A}_{sm} H^{(l-1)}, \qquad H^{(0)} = \mathrm{MLP}(X)$

where $\hat{A}_{sm}$ is the symmetric normalized adjacency matrix of the similar graph in transformer detection, $l \in \{1, \dots, L\}$ is the layer index of the neural network, MLP denotes a multi-layer perceptron, and the representation of the last layer, $H^{(L)}$, is taken as the representation matrix $H^{(hm)}$ of the transformer sample nodes in the similar graph.
With reference to the first aspect, in some possible implementations, the transformer fault node representation of the dissimilar KNN graph is learned as:

$H^{(l)} = \alpha \hat{L}_{ds} H^{(l-1)}, \qquad H^{(0)} = \mathrm{MLP}(X)$

where $l \in \{1, \dots, L\}$, $\hat{L}_{ds}$ is the symmetric normalized Laplacian matrix of the dissimilar graph in transformer detection, $\alpha$ is a hyperparameter controlling the high-pass filter for transformer fault detection, and the representation of the last layer, $H^{(L)}$, is taken as the representation matrix $H^{(ht)}$ of the transformer sample nodes in the dissimilar graph.
With reference to the first aspect, in some possible implementations, the step of optimizing the transformer fault node representation includes:

calculating the degree of similarity between transformer fault categories by cosine similarity:

$s_{i,j} = \dfrac{\bar{x}_i \cdot \bar{x}_j}{\lVert \bar{x}_i \rVert \, \lVert \bar{x}_j \rVert}, \qquad \bar{x}_i = \frac{1}{n_i} \sum_{k=1}^{n_i} X^{(i)}_k$

where $s_{i,j}$ denotes the degree of similarity between the i-th and j-th fault categories, $n_i$ and $n_j$ denote the number of samples contained in the i-th and j-th fault categories, $X^{(i)} \in \mathbb{R}^{n_i \times d}$ and $X^{(j)} \in \mathbb{R}^{n_j \times d}$ denote the feature matrices of all transformer fault nodes of fault categories i and j, and d denotes the feature dimension of the transformer fault nodes;

sorting the degrees of similarity between each fault category and the other fault categories in descending order, and constructing ordered positive sample sets $P_1, \dots, P_r$ of transformer fault nodes satisfying:

$s(h, h_1) \ge s(h, h_2) \ge \dots \ge s(h, h_r) > s(h, h_n), \qquad h_i \in P_i, \; h_n \in N$

where $s(\cdot)$ denotes cosine similarity, h denotes the target node, $h_i$ denotes a node under consideration in the i-th positive sample set $P_i$, $h_n$ denotes a sample node in the negative sample set, and N denotes the negative sample set;

computing the InfoNCE loss recursively, level by level, until no positive sample set remains, formally defined as $\mathcal{L}_{rank} = \sum_{i=1}^{r} l_i$, where $l_i$ is calculated as:

$l_i = -\log \dfrac{\sum_{h_p \in P_i} \exp\big(s(h, h_p)/\tau_i\big)}{\sum_{h_p \in P_i} \exp\big(s(h, h_p)/\tau_i\big) + \sum_{h_n \in N} \exp\big(s(h, h_n)/\tau_i\big)}$

where $\tau_i$ denotes the temperature, with $\tau_i < \tau_{i+1}$ so that faults with high similarity are penalized more strongly, $h_p$ denotes a node representation in the positive sample set, $h_n$ a node representation in the negative sample set, $P_i$ the i-th positive sample set, and N the negative sample set;

the objective function of the optimized transformer fault detection module is:

$\mathcal{L} = -\sum_{i} y_i \log \hat{y}_i + \mathcal{L}_{rank}$

where $y_i$ denotes the true label of a transformer fault node and $\hat{y}_i$ denotes the label predicted from the learned transformer fault node representation.
With reference to the first aspect, in some possible implementations, determining the transformer fault category using the multi-layer perceptron classifier is formally defined as:

$\hat{y}_i = \arg\max \, \mathrm{softmax}\big(\mathrm{MLP}(h_i)\big)$

where $\hat{y}_i$ denotes the final predicted fault category of input sample i and $h_i$ denotes the representation of input transformer sample i learned by the model.
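A self-contained sketch of such a multi-layer perceptron classifier with a softmax output head; the dimensions are illustrative and the weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_classify(h, w1, b1, w2, b2):
    """One-hidden-layer perceptron with a softmax head over fault classes."""
    z = np.maximum(h @ w1 + b1, 0.0)            # ReLU hidden layer
    logits = z @ w2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=1, keepdims=True)
    return probs, probs.argmax(axis=1)          # class with the largest probability

# toy dimensions: 8-dim node representations, 16 hidden units, 5 fault classes
h = rng.normal(size=(4, 8))
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 5)), np.zeros(5)
probs, pred = mlp_classify(h, w1, b1, w2, b2)
```

Taking the argmax of the softmax probabilities matches the rule of selecting the fault category with the maximum probability value as the sample's fault category.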
A second aspect provides a transformer fault detection system based on similarity and contrastive learning, comprising:
an extraction module, configured to extract the features of the gases dissolved in the transformer oil;
a graph network module, configured to calculate the relations between transformer fault data based on cosine similarity and to construct the similar and dissimilar KNN graphs;
a multi-view feature fusion module, configured to fuse the node representations learned from the similar KNN graph by the low-pass filter with the node representations learned from the dissimilar KNN graph by the high-pass filter into the final node representations;
a transformer sample feature optimization module, configured to learn the inconsistent patterns between different categories of transformer faults by a ranking contrastive learning method and to optimize the transformer fault node representations;
a transformer fault classification module, configured to determine the transformer fault category using a multi-layer perceptron classifier;
and a display module, configured with voice broadcast and a display screen.
A third aspect provides a transformer fault detection device based on similarity and contrast learning, the device being communicatively coupled to a transformer fault detection system based on similarity and contrast learning.
A fourth aspect provides a computer readable medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method according to the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
according to the transformer fault detection method based on the side heterogeneity and the contrast learning, common dissolved gas analysis data are constructed into graphs with similar sides and different sides, the similarity and difference relation between the dissolved gas analysis data of the transformer is captured through a low-pass filter and a high-pass filter in the graph representation learning, and the effective characteristics of a transformer sample are learned from local and global angles, so that the accuracy of transformer fault detection is improved. And secondly, adopting a sorting comparison loss to capture inconsistent modes among different fault categories of the transformer, and solving the problem of fuzzy boundary of each fault category in the analysis data of the dissolved gas of the transformer. Finally, the learned representation is classified by a classification model for diagnosing the transformer fault. The model can solve the problem of low accuracy of transformer fault detection caused by large similarity among fault data and small difference among fault categories in transformer fault diagnosis.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic flow diagram of similar and dissimilar K-nearest neighbor diagrams;
FIG. 3 is a schematic representation of an optimized transformer fault node;
FIG. 4 is a schematic diagram of constructing a transformer fault detection scheme;
FIG. 5 is a schematic diagram of a constructed transformer fault signature learner;
FIG. 6 is a schematic diagram of the constructed ranking contrastive loss for transformer fault nodes;
fig. 7 is a schematic diagram of constructing a transformer fault node classifier.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The following describes an exemplary transformer fault detection method based on similarity and contrastive learning, as shown in FIG. 1.
step one: calculating the similarity of soluble data in the transformer fault sample oil by adopting cosine similarity, and determining the similarity among the transformer fault samples;
step two: constructing a similar KNN diagram by connecting edges of each transformer fault sample and K transformer fault samples which are most similar to each transformer fault sample based on the similarity value, and simultaneously constructing a dissimilar KNN diagram by connecting edges of each transformer fault sample and K transformer fault samples which are least similar to each transformer fault sample;
step three: constructing a low-pass filter for transformer fault detection, and learning transformer fault node representation of a similar KNN diagram;
step four: constructing a high-pass filter for transformer fault detection, and learning transformer fault node representations of dissimilar KNN graphs;
step five: the method comprises the steps of obtaining differences among fault categories of transformers, achieving inconsistent modes among faults of different categories of the transformers through a sequencing comparison learning method, and optimizing transformer fault node representation;
step six: and diagnosing the faults of the transformer samples, classifying the learned fault node representations of the transformers by using a multi-layer perceptron classifier, and taking the fault type with the largest probability value as the fault type of the sample.
When the method is used, calculating the similarity of the dissolved-gas data in the transformer fault samples by cosine similarity determines the similarity between the transformer fault samples more accurately and improves the accuracy of fault detection. Constructing the similar KNN graph by connecting each transformer fault sample with its K most similar samples, and the dissimilar KNN graph by connecting it with its K least similar samples, makes full use of both the similarity and dissimilarity information between fault samples and improves the sensitivity and robustness of fault detection. Constructing the low-pass and high-pass filters allows the transformer fault node representations to be learned in different frequency ranges, improving the extraction of fault features. Obtaining the differences between transformer fault categories and learning the inconsistent patterns between different categories of faults by ranking contrastive learning optimizes the transformer fault node representations and improves the accuracy of fault classification. Finally, classifying the learned transformer fault node representations with the multi-layer perceptron classifier and taking the fault category with the largest probability value as the sample's fault category achieves accurate classification of transformer samples and improves the efficiency and reliability of fault diagnosis.
In summary, the method combines similarity measurement, contrastive learning and a multi-layer perceptron classifier to effectively detect and diagnose transformer faults, improving the reliability and safety of transformer equipment.
In other embodiments, in step one, the similarity of the dissolved-gas data in the transformer fault samples is calculated using cosine similarity, and the similarity between the transformer fault samples is determined. The calculation is:

$sm_{i,j} = \dfrac{x_i \cdot x_j}{\lVert x_i \rVert \, \lVert x_j \rVert}$

where $sm_{i,j}$ denotes the cosine similarity value of the i-th and j-th transformer fault nodes, $x_i, x_j \in \mathbb{R}^d$ denote the feature vectors of the i-th and j-th transformer fault nodes respectively, and d denotes the feature dimension of a fault node.
In this embodiment, the cosine similarity calculation can effectively measure the similarity between transformer fault samples and is not affected by the magnitude of the feature values. By calculating the similarity between fault samples, the degree of similarity between different samples can be determined, providing a basis for subsequent fault detection and diagnosis. Cosine similarity is simple to compute and efficient, making it suitable for processing and analyzing large-scale transformer fault data.
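As a minimal numpy sketch of this step, the pairwise cosine-similarity matrix can be computed as below; the feature values are illustrative toy numbers, not actual dissolved-gas measurements, and `cosine_similarity_matrix` is a hypothetical helper name.

```python
import numpy as np

def cosine_similarity_matrix(X: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity sm[i, j] between fault-sample feature vectors."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)  # guard against zero-norm rows
    return Xn @ Xn.T

# five hypothetical fault samples with three dissolved-gas features each
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
sm = cosine_similarity_matrix(X)
# sm[0, 1] is close to 1 (nearly parallel vectors); sm[0, 4] is 0 (orthogonal)
```

Normalizing the rows once and taking a single matrix product computes all pairwise similarities at once, which scales well to large sample sets.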
In other embodiments, in step two, the transformer fault detection graphs are constructed. Specifically, based on the similarity values, each transformer fault sample is connected to the K samples with the largest cosine similarity values to form the similar KNN graph, and at the same time each transformer fault sample is connected to the K samples with the smallest cosine similarity values to form the dissimilar KNN graph.
As shown in FIG. 2, the steps are as follows: sort the similarity values between transformer fault samples calculated in step one in descending order; select, for each transformer fault sample, the first K sorted samples as its neighbors to construct the similar KNN graph; and connect each transformer fault sample with the last K sorted samples to construct the dissimilar KNN graph.
In this embodiment, the similarity values between transformer fault samples calculated in step one are first sorted in descending order. For each transformer fault sample, the first K sorted samples are selected as its neighbors to construct the similar KNN graph; that is, the sample is connected to its K most similar samples, forming a subgraph centered on that sample. At the same time, each transformer fault sample is connected to the last K sorted samples to construct the dissimilar KNN graph, i.e. the sample is connected to its K least similar samples. Constructing the similar and dissimilar KNN graphs extracts the relation information between transformer fault samples into subgraphs with local similarity and dissimilarity, so that the characteristics of the fault samples can be represented more accurately, which facilitates subsequent fault classification and diagnosis. The two graphs provide the connection relations between fault samples and help discover the commonalities and differences between them, supplying more comprehensive information for fault detection and diagnosis and improving its accuracy and reliability. By selecting an appropriate value of K, the sparsity of both graphs can be controlled, reducing unnecessary computation and storage overhead while retaining the key information and improving the efficiency of the algorithm. The construction process is simple and intuitive, easy to understand and implement, and can be combined with other classification and diagnosis algorithms to provide additional information support.
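The construction in this embodiment can be sketched in numpy as follows; `build_knn_graphs` is a hypothetical helper name, and the row-wise top-K/bottom-K selection follows the sorted-similarity steps described above.

```python
import numpy as np

def build_knn_graphs(sm: np.ndarray, k: int):
    """Adjacency matrices of the similar and dissimilar KNN graphs from a
    pairwise similarity matrix sm; self-links are excluded."""
    n = sm.shape[0]
    s = sm.astype(float).copy()
    np.fill_diagonal(s, -np.inf)           # never pick the sample itself
    order = np.argsort(-s, axis=1)         # per-row indices, descending similarity
    a_sim = np.zeros((n, n))
    a_dis = np.zeros((n, n))
    for i in range(n):
        a_sim[i, order[i, :k]] = 1.0                   # K most similar neighbours
        a_dis[i, order[i, n - 1 - k:n - 1]] = 1.0      # K least similar (self sorts last)
    return a_sim, a_dis

# toy 4-sample similarity matrix
sm = np.array([[1.0, 0.9, 0.1, 0.0],
               [0.9, 1.0, 0.2, 0.1],
               [0.1, 0.2, 1.0, 0.8],
               [0.0, 0.1, 0.8, 1.0]])
a_sim, a_dis = build_knn_graphs(sm, k=1)
# sample 0 links to sample 1 (most similar) and to sample 3 (least similar)
```

Setting the diagonal to negative infinity before sorting pushes each sample to the end of its own row, so the self-link is excluded from both the top-K and the bottom-K selections.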
In other embodiments, in step three, the transformer fault node representation of the similar KNN graph is learned as:

$H^{(l)} = \hat{A}_{sm} H^{(l-1)}, \qquad H^{(0)} = \mathrm{MLP}(X)$

where $\hat{A}_{sm}$ is the symmetric normalized adjacency matrix of the similar graph in transformer detection, $l \in \{1, \dots, L\}$ is the layer index of the neural network, MLP denotes a multi-layer perceptron, and the representation of the last layer, $H^{(L)}$, is taken as the representation matrix $H^{(hm)}$ of the transformer sample nodes in the similar graph.
In this embodiment, by constructing a similar KNN graph and learning node representations, a richer feature representation may be obtained using relationship information between transformer fault samples. Compared with the traditional feature extraction method, the method can better capture the similarity and the difference between samples, and improves the accuracy of fault detection. The use of low pass filters may limit the spectral range of the node representation so that the node representation is smoother. This helps to reduce the effects of noise and improve the stability and reliability of the feature. Through learning node representation, the characteristics of the transformer fault samples can be reduced and abstracted, and the dimension and redundant information of the characteristics are reduced. This helps reduce computational and storage overhead and improves the efficiency and scalability of the model.
In other embodiments, in the fourth step, the transformer fault node representation learned on the dissimilar KNN graph is expressed as:
wherein l ∈ {1, ..., L}, L̃_t is the symmetrically normalized Laplacian matrix of the dissimilar graph in transformer detection, α is a hyperparameter of the high-pass filter controlling transformer fault detection, and the representation of the last layer is taken as the representation matrix H^(ht) of the transformer sample nodes in the dissimilar graph.
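As with the low-pass filter, the formula is an image in the original publication. One reconstruction consistent with the surrounding symbols (the exact way α enters the update is an assumption of this sketch) is:

```latex
H^{(l)} = \mathrm{MLP}\!\left(\alpha\,\tilde{L}_t H^{(l-1)}\right),\qquad
\tilde{L}_t = I - D^{-1/2} A_t D^{-1/2},\qquad
H^{(ht)} = H^{(L)}
```

where A_t is the adjacency matrix of the dissimilar KNN graph and D its degree matrix; the Laplacian acts as a high-pass operator that amplifies differences between neighboring nodes.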
In this embodiment, by constructing the dissimilar KNN graph and learning node representations, the differential information between transformer fault samples is fully utilized to obtain a more accurate feature representation. Compared with traditional feature-extraction methods, this better captures the differences between samples and improves the accuracy of fault detection. The high-pass filter emphasizes the high-frequency information in the node representation and helps highlight abnormal features in the transformer fault samples, increasing the sensitivity of fault detection so that fine fault signals are captured better. Learning the node representations also reduces and abstracts the features of the transformer fault samples, lowering feature dimensionality and redundancy; this reduces computation and storage overhead and improves the efficiency and scalability of the model.
In other embodiments, as shown in fig. 3, in the fifth step, the differences between transformer fault classes are captured and the inconsistent patterns between different classes of transformer faults are learned using a ranking-contrast learning method, comprising the following steps:
A. calculating the similarity degree among fault categories of the transformer through cosine similarity:
wherein s_{i,j} denotes the degree of similarity between the i-th and j-th fault classes, i and j index the i-th and j-th fault classes respectively, n_i denotes the number of samples contained in the i-th fault class, n_j the number contained in the j-th fault class, X^(i) denotes the feature matrix of all transformer fault nodes of fault class i, X^(j) that of fault class j, and d denotes the feature dimension of the transformer fault nodes.
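The class-similarity formula survives only as an image. Given the symbols above, a plausible reconstruction, under the assumption that class similarity averages the pairwise cosine similarities over the two classes, is:

```latex
s_{i,j} \;=\; \frac{1}{n_i\,n_j}\sum_{p=1}^{n_i}\sum_{q=1}^{n_j}
\frac{X^{(i)}_{p}\cdot X^{(j)}_{q}}
     {\bigl\lVert X^{(i)}_{p}\bigr\rVert\,\bigl\lVert X^{(j)}_{q}\bigr\rVert}
```

where X^(i)_p denotes the p-th row (sample) of the feature matrix X^(i) ∈ R^{n_i × d}.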
B. The degree of similarity between each fault category and the other fault categories is sorted from large to small, and an ordered set of positive sample sets P_1, ..., P_r of transformer fault nodes is constructed, satisfying:
wherein s(·) denotes the cosine similarity, h denotes the target node, h_i denotes a node under study in the positive sample sets, h_n denotes a sample node in the negative sample set, P_i denotes the i-th positive sample set, and N denotes the negative sample set.
C. To compute the InfoNCE loss recursively using the hierarchical information between transformer fault categories, starting from the positive sample set of the first group of transformer fault nodes, the positive sample sets of the remaining fault categories are taken as negative samples. The current positive sample set is then discarded, the positive sample set of the next fault category is taken as the positive set with the positive sample sets of the other transformer fault categories as negative sets, and the process is repeated until no positive sample set remains. This is formalized as the sum of the per-level losses l_i, where each l_i is calculated as follows:
wherein τ_i denotes the temperature, with τ_i < τ_{i+1} so that faults with high similarity are penalized more strongly; h_p denotes a node representation in the positive sample set, h_n a node representation in the negative sample set, P_i the i-th positive sample set, and N the negative sample set.
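The per-level loss l_i is shown only as an image in the original. A reconstruction consistent with the recursive procedure of step C and the symbols above (the averaging over P_i is an assumption of this sketch) is:

```latex
l_i = -\frac{1}{|P_i|}\sum_{h_p \in P_i}
\log\frac{\exp\!\bigl(s(h, h_p)/\tau_i\bigr)}
{\displaystyle\sum_{h_{p'}\in P_i}\exp\!\bigl(s(h, h_{p'})/\tau_i\bigr)
 \;+\; \sum_{h_n \in N_i}\exp\!\bigl(s(h, h_n)/\tau_i\bigr)}
```

where N_i denotes the negatives at level i, i.e. the remaining positive sample sets P_{i+1}, ..., P_r together with the negative sample set N.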
D. The main objective function of the optimized transformer fault detection module is as follows:
wherein y_i denotes the true label of a transformer fault node, and ŷ_i denotes the label predicted for the transformer fault node from the learned representation.
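The objective itself is an image in the original. A plausible reading, under the assumption that the module combines a cross-entropy classification term with the ranking-contrast term via a balancing weight λ (λ is not stated in the text and is an assumption here), is:

```latex
\mathcal{L} \;=\; -\sum_i y_i \log \hat{y}_i \;+\; \lambda\,\mathcal{L}_{\mathrm{rank}},
\qquad \mathcal{L}_{\mathrm{rank}} = \sum_i l_i
```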
In this embodiment, by learning the inconsistent patterns among different types of transformer faults, the differences among fault types can be distinguished better, which improves the accuracy and reliability of fault detection and allows the transformer fault type to be judged more accurately. The ranking-contrast learning method effectively exploits the relative relationships between samples rather than relying only on their absolute feature values, making the model more robust and able to learn effectively under different data distributions and noise. By optimizing the node representations, the features of different types of transformer fault samples are distinguished and abstracted better, feature redundancy is reduced, and the efficiency and scalability of the model are improved.
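The ranking-contrast procedure of steps A–D can be sketched in Python; the function name, the use of a single anchor node, and the averaging inside each level are illustrative assumptions of this sketch:

```python
import numpy as np

def ranked_infonce(h, pos_sets, neg, taus):
    """Recursive InfoNCE over ordered positive sample sets P_1..P_r.

    h        : (d,) anchor node representation
    pos_sets : list of (m_i, d) arrays, ordered by decreasing class similarity
    neg      : (m_n, d) negative sample representations
    taus     : increasing temperatures tau_1 < ... < tau_r
    """
    def cos(a, B):
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

    total = 0.0
    for i, (P, tau) in enumerate(zip(pos_sets, taus)):
        # remaining positive sets act as extra negatives at this level,
        # then the current set is discarded (the recursion of step C)
        rest = pos_sets[i + 1:]
        negatives = np.vstack(rest + [neg]) if rest else neg
        pos = np.exp(cos(h, P) / tau)
        den = pos.sum() + np.exp(cos(h, negatives) / tau).sum()
        total += -np.log(pos / den).mean()   # l_i, averaged over P_i
    return total                              # sum of per-level losses
```

The increasing temperatures tau_1 < ... < tau_r mirror the condition τ_i < τ_{i+1}, so highly similar fault classes (early levels) are contrasted more sharply.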
In other embodiments, as shown in fig. 7, in step six, determining the transformer fault class using the multi-layer perceptron classifier is formally defined as:
wherein ŷ_i denotes the final predicted fault class of input sample i, and h_i denotes the representation of input transformer sample i learned by the model.
In this embodiment, using the learned transformer fault node representation as the input feature better reflects the fault characteristics of a transformer sample and improves the accuracy and reliability of fault diagnosis. The multi-layer perceptron classifier has strong nonlinear modeling capability and can learn complex fault patterns and relationships; the model adapts to different types of fault samples and generalizes well. The classifier also trains and infers efficiently, making it suitable for large-scale datasets and real-time fault-diagnosis scenarios, and it can be combined with other feature-engineering and model-optimization methods to further improve diagnostic performance.
In summary, this technique is implemented as follows:
and constructing a transformer fault detection diagram. As shown in fig. 4, the maximum K and minimum K samples of each transformer sample in the similarity matrix are used as neighbor nodes for constructing the similarity and dissimilarity K-neighbor graphs, respectively, by calculating the transformer fault sample data similarity matrix using cosine similarity.
Constructing the transformer fault feature learner. As shown in fig. 5, the transformer fault feature learner includes two modules: a low-pass filter serving as the transformer fault node similarity-feature learning module, and a high-pass filter serving as the transformer fault node difference-feature learning module. The low-pass filter learns node representations in the transformer similar KNN graph using the commonly used graph convolution operation. Specifically, the k-th layer representation of node i in the transformer similar KNN graph is obtained by linearly transforming the (k-1)-th layer representation of node i and adding the degree-normalized sum of the linearly transformed (k-1)-th layer representations of all neighbor nodes of node i in the similar KNN graph, defined as:
wherein,the kth layer representing node i in the transformer-like KNN diagram represents +.>And->K-1 representation, d, of neighbor node j in the respective node i and its similar KNN graph i And d j Respectively representing the degrees of a node i and a neighbor node j in a transformer similar KNN diagram, a ij The importance of node j to node i is represented, herein as the cosine similarity value of node j and node i.
The high-pass filter for transformer fault diagnosis adopts the inverse operation of the low-pass filter to obtain the difference information between transformer fault nodes; the specific operation is:
wherein h_i^(k) denotes the k-th layer representation of node i in the transformer dissimilar KNN graph, h_i^(k-1) and h_j^(k-1) denote the (k-1)-th layer representations of node i and of its neighbor node j in the dissimilar KNN graph respectively, d_i and d_j denote the degrees of node i and neighbor node j in the transformer dissimilar KNN graph, and a_ij denotes the importance of node j to node i, here taken as the cosine similarity value of node j and node i.
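This formula is likewise an image in the original. Reading "the inverse operation of the low-pass filter" as negating the neighborhood aggregation gives the following plausible reconstruction (W again denotes an assumed learnable linear transformation):

```latex
h_i^{(k)} \;=\; W\,h_i^{(k-1)} \;-\; \sum_{j\in\mathcal{N}(i)}
\frac{a_{ij}}{\sqrt{d_i\,d_j}}\; W\,h_j^{(k-1)}
```

The subtraction suppresses what a node shares with its (dissimilar-graph) neighbors, so the surviving signal emphasizes the difference information between fault nodes.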
The similar representation and the difference representation of the transformer fault nodes are obtained through the low-pass filter and the high-pass filter, respectively. To further optimize the representation of the transformer fault nodes and improve the accuracy of transformer fault detection, ranking-contrast learning is performed on nodes of different faults after fusing the node representations from the two views.
Constructing the ranking-contrast loss of the transformer fault nodes. As shown in fig. 6, during training, the similarity representation of a transformer fault node in the similar KNN view is learned via the low-pass filter, and the difference representation of the node in the dissimilar KNN view is learned via the high-pass filter. To extract the effective information in the transformer fault features, the similarity between transformer faults is first calculated by cosine similarity according to the label information of the transformer fault samples; the similarities between each fault category and all other fault categories are sorted from large to small, and an ordered set of positive sample sets is constructed, so that the contrast loss of nodes between different fault types can be defined as:
wherein l_i is calculated as follows:
wherein s(·) denotes the similarity score, τ_i denotes the temperature, with τ_i < τ_{i+1} so that faults with high similarity are penalized more strongly; h_p denotes a positive-sample node representation, h_n a negative-sample node representation, P_i the i-th positive sample set, and N the negative sample set.
Constructing the transformer fault node classifier. As shown in fig. 7, the transformer fault classification module takes as input the final representation of a transformer fault node output from the transformer fault representation learning module, namely Layer 1 of the MLP classifier in fig. 7; the node representation of Layer 1 is then linearly transformed into the hidden-layer representation of Layer 2; the ReLU activation function converts the hidden-layer representation of Layer 2 into the output of Layer 3; and finally the output of the MLP classifier is taken as the fault detection result for the sample. To reduce the difference between the predicted transformer fault class and the true class, and to improve the stability of transformer fault detection, the method herein employs a cross-entropy loss function to constrain the model during training.
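The classifier head and its cross-entropy constraint can be sketched as follows, reading fig. 7 in the usual linear–ReLU–linear order; the function names and weight shapes are assumptions of this sketch:

```python
import numpy as np

def mlp_classify(H, W1, b1, W2, b2):
    """Layer 1 (learned node representations) -> linear -> Layer 2
    (ReLU hidden) -> linear -> Layer 3 (softmax class probabilities)."""
    hidden = np.maximum(0.0, H @ W1 + b1)          # Layer 2 with ReLU
    logits = hidden @ W2 + b2                      # Layer 3 scores
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs             # fault class with max probability

def cross_entropy(probs, y):
    """Cross-entropy loss constraining the model during training."""
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))
```

Taking the argmax of the softmax output implements "taking the fault type with the maximum probability value as the sample fault type" from claim 1.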
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended, mean "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A transformer fault detection method based on similarity and contrast learning, characterized by comprising the following steps:
calculating the similarity of the dissolved-gas data in the transformer fault sample oil using cosine similarity, and determining the similarity between transformer fault samples;
based on the similarity values, connecting each transformer fault sample with its K most similar transformer fault samples to construct a similar KNN graph, and simultaneously connecting each transformer fault sample with its K least similar transformer fault samples to construct a dissimilar KNN graph;
learning a transformer fault node representation of a similar KNN diagram by constructing a low-pass filter for transformer fault detection;
by constructing a high-pass filter for transformer fault detection, learning transformer fault node representations of dissimilar KNN graphs;
the method comprises the steps of obtaining differences among fault categories of a transformer by using fault node representation of the transformer, realizing inconsistent modes among different categories of faults of the transformer by using a sequencing comparison learning method, and optimizing the fault node representation of the transformer;
and determining the transformer fault type from the optimized transformer fault node representation using a multi-layer perceptron classifier, taking the fault type with the maximum probability value as the fault type of the sample.
2. The transformer fault detection method based on similarity and contrast learning according to claim 1, wherein the step of calculating the similarity of the dissolved-gas data in the transformer fault sample oil using cosine similarity comprises:
wherein sm_{i,j} denotes the cosine similarity value of the i-th transformer fault node and the j-th transformer fault node, the feature vectors of the i-th and the j-th transformer fault nodes are denoted respectively, and d denotes the feature dimension of the fault nodes.
3. The transformer fault detection method based on similarity and contrast learning according to claim 2, wherein the step of constructing the similar KNN graph and the dissimilar KNN graph comprises:
sorting the similarity values between transformer fault samples from large to small;
selecting, for each transformer fault sample, the first K sorted samples as neighbors to construct the similar KNN graph;
and connecting each transformer fault sample with the last K sorted samples to construct the dissimilar KNN graph.
4. The transformer fault detection method based on similarity and contrast learning according to claim 1, wherein the transformer fault node representation learned on the similar KNN graph is expressed as:
wherein Ã_m is the symmetrically normalized adjacency matrix of the similar graph in transformer detection, l ∈ {1, ..., L} is the layer index of the neural network, MLP denotes a multi-layer perceptron, and the representation of the last layer is taken as the representation matrix H^(hm) of the transformer sample nodes in the similar graph.
5. The transformer fault detection method based on similarity and contrast learning according to claim 1, wherein the transformer fault node representation learned on the dissimilar KNN graph is expressed as:
wherein l ∈ {1, ..., L}, L̃_t is the symmetrically normalized Laplacian matrix of the dissimilar graph in transformer detection, α is a hyperparameter of the high-pass filter controlling transformer fault detection, and the representation of the last layer is taken as the representation matrix H^(ht) of the transformer sample nodes in the dissimilar graph.
6. The method for detecting a transformer fault based on similarity and contrast learning of claim 1, wherein the step of optimizing the representation of the transformer fault node comprises:
calculating the similarity degree among fault categories of the transformer through cosine similarity:
wherein s_{i,j} denotes the degree of similarity between the i-th and j-th fault classes, i and j index the i-th and j-th fault classes respectively, n_i denotes the number of samples contained in the i-th fault class, n_j the number contained in the j-th fault class, X^(i) denotes the feature matrix of all transformer fault nodes of fault class i, X^(j) that of fault class j, and d denotes the feature dimension of the transformer fault nodes;
sorting the degree of similarity between each fault category and the other fault categories from large to small, and constructing an ordered set of positive sample sets P_1, ..., P_r of transformer fault nodes satisfying:
wherein s(·) denotes the cosine similarity, h denotes the target node, h_i denotes a node under study in the positive sample sets, h_n denotes a sample node in the negative sample set, P_i denotes the i-th positive sample set, and N denotes the negative sample set;
the InfoNCE penalty is computed recursively until there is no positive sample set, formalized defined asWherein l i The calculation mode of (2) is as follows:
wherein τ_i denotes the temperature, with τ_i < τ_{i+1} so that faults with high similarity are penalized more strongly; h_p denotes a node representation in the positive sample set, h_n a node representation in the negative sample set, P_i the i-th positive sample set, and N the negative sample set;
the objective function of the optimized transformer fault detection module is as follows:
wherein y_i denotes the true label of a transformer fault node, and ŷ_i denotes the label predicted for the transformer fault node from the learned representation.
7. The transformer fault detection method based on similarity and contrast learning according to claim 1, wherein determining the transformer fault type using the multi-layer perceptron classifier is formally defined as:
wherein ŷ_i denotes the final predicted fault class of input sample i, and h_i denotes the representation of input transformer sample i learned by the model.
8. A transformer fault detection system based on similarity and contrast learning, comprising:
and an extraction module: means configured to extract a characteristic of dissolved gas in transformer oil;
a graph network module: the method comprises the steps of configuring the transformer fault data to calculate the relation between transformer fault data based on cosine similarity and constructing similar and dissimilar KNN diagrams;
a multi-view fusion feature module: the method comprises the steps of merging node representations in a low-pass filter learning similar KNN diagram and node representations in a high-pass filter learning dissimilar KNN diagram as final node representations;
optimizing a transformer sample feature module: the method comprises the steps of configuring to realize learning inconsistent modes among different types of faults of the transformer by using a sequencing-comparison learning method, and optimizing transformer fault node representation;
a transformer fault classification module: configured to determine a transformer fault category using a multi-layer perceptual classifier;
and a display module: the voice broadcasting and display screen is configured.
9. A transformer fault detection device based on similarity and contrast learning, characterized in that
the transformer fault detection device is communicatively connected to a transformer fault detection system based on similarity and contrast learning according to claim 8.
10. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311139995.5A CN117290767A (en) | 2023-09-05 | 2023-09-05 | Transformer fault detection method system and equipment based on similarity and contrast learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117290767A true CN117290767A (en) | 2023-12-26 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |