
CN112633315A - Electric power system disturbance classification method - Google Patents

Electric power system disturbance classification method Download PDF

Info

Publication number
CN112633315A
CN112633315A (application CN202011132259.3A)
Authority
CN
China
Prior art keywords
disturbance
data
power system
dae
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011132259.3A
Other languages
Chinese (zh)
Inventor
刘有志
蒋雨辰
张扬
李子康
刘灏
毕天姝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
North China Electric Power University
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University, Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical North China Electric Power University
Priority to CN202011132259.3A priority Critical patent/CN112633315A/en
Publication of CN112633315A publication Critical patent/CN112633315A/en
Pending legal-status Critical Current


Classifications

    • G06F18/24323 Pattern recognition — classification techniques — tree-organised classifiers
    • G06F18/214 Pattern recognition — generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Neural networks — architecture — combinations of networks
    • G06N3/084 Neural networks — learning methods — backpropagation, e.g. using gradient descent


Abstract

The invention provides a power system disturbance classification method in which a feature extraction method based on a stacked denoising autoencoder captures feature expressions that are robust to lost data in the disturbance data, and power system disturbances are then recognized with a random forest classifier on that basis. The method classifies PMU disturbance data quickly and accurately, retains high identification accuracy for PMU disturbance data containing lost data, and has good noise immunity. Compared with existing disturbance classification methods, it can quickly and accurately classify PMU disturbance data containing lost data and enables real-time monitoring of the dynamic behavior of the power system.

Description

Electric power system disturbance classification method
Technical Field
The invention relates to the technical field of power systems, in particular to a power system disturbance classification method, and more particularly to a power system disturbance classification method that accounts for PMU (Phasor Measurement Unit) data loss.
Background
With the continuous expansion of power systems and the connection of large numbers of power electronic devices, the complexity of the grid structure keeps increasing, and grid security problems have become increasingly prominent. In recent years, major blackout accidents have occurred frequently, greatly affecting economic development and people's lives. Research shows that a blackout accident usually starts from a single fault and, through a series of chain reactions, finally causes the collapse of the power grid. Therefore, real-time monitoring and analysis of power system disturbances play an important role in the safe and stable operation of the power system. Synchronized Phasor Measurement Units (PMUs), owing to their synchronism, rapidity, and accuracy, can provide a data basis for system protection and closed-loop control, making real-time monitoring of power system disturbances possible.
At present, research on power system disturbance classification is mainly divided into model-based and data-based methods. Model-based methods model the grid through the system topology and parameters and identify the disturbance type according to its triggering mechanism; for complex systems, however, the computational burden is large and the problem may even be unsolvable. Data-based methods analyze historical data to obtain the nonlinear mapping between the data and the target, thereby identifying the disturbance type. With increasing system complexity and the influx of massive power data, data-based methods have gradually become the more effective analysis approach.
Most existing methods assume that PMU data are normal and ignore the influence of PMU data quality. However, about 10% to 17% of PMU data suffer data-quality problems of varying degrees, which severely restricts their application in power system disturbance classification.
Object of the Invention
The invention aims to provide a power system disturbance classification method based on a stack denoising autoencoder and a random forest classifier, aiming at the defects in the prior art.
Disclosure of Invention
The invention provides a power system disturbance classification method based on a stack denoising autoencoder and a random forest classifier, which comprises the following steps:
step 1: generating disturbance data of the power system by using an offline time domain simulation method;
step 2: carrying out standardization processing on the disturbance data obtained by the off-line simulation method in the step 1;
step 3: constructing and training a deep neural network of the stacked denoising autoencoder, training the stacked denoising autoencoder by taking the effective values of frequency and voltage within 0.5 s after the disturbance occurs as its input;
step 4: extracting data features with the stacked denoising autoencoder trained in step 3 to obtain high-level feature expressions;
step 5: constructing and training a random forest classifier, and classifying the high-level features extracted in step 4 with the trained random forest classifier to realize disturbance identification.
Further, the process of generating the disturbance data by the offline time domain simulation method in step 1 specifically includes: selecting 6 disturbance types for simulation, namely three-phase short circuit (3-φ Flt), single-phase ground fault (φ-g Flt), generator output reduction (GL), load switching-in, load shedding, and three-phase line break (LT); the system is the IEEE 10-machine 39-bus system, the simulation software is PSD-BPA, the simulation time is 30 s, the simulation step is set to 0.02 s, the disturbance is triggered after 5 s, and the frequency and voltage effective values of each bus are output.
Still further, the normalization process in step 2 is as follows: assuming a PMU reporting rate of 50 Hz, the frequency and voltage within 0.5 s are denoted f = [f_1, f_2, …, f_25]^T and U = [U_1, U_2, …, U_25]^T, respectively; the frequency and voltage signals are normalized separately by z-score standardization:

x̂ = (x − μ)/σ

where x̂ is the normalized data, and μ and σ are the mean and standard deviation, respectively, of the corresponding variable.
Still further, the stacked denoising autoencoder SDAE in step 3 is a deep network model formed by stacking denoising autoencoders DAE, and the process of constructing and training the stacked denoising autoencoder deep neural network includes the following sub-steps:

S31: let x = [x_1, x_2, …, x_m]^T be the DAE input data; first, x is corrupted with a certain probability C to obtain the damaged disturbance data x̃; the DAE then maps the corrupted data to the hidden-layer feature expression h = [h_1, h_2, …, h_t]^T by an encoding operation, and then reconstructs the complete sample x̂ by decoding; the DAE encoding and decoding processes are:

h = f_θ(W·x̃ + b)
x̂ = g_θ′(W′·h + b′)

where W and W′ are the encoding and decoding matrices, respectively; b and b′ are the encoding and decoding bias vectors, respectively; θ = (W, b) and θ′ = (W′, b′) are the encoding and decoding parameters, respectively; f_θ and g_θ′ are activation functions, here the Sigmoid function:

f(x) = 1/(1 + e^(−x))
S32: train the SDAE; during this process, the parameters are adjusted with the goal of minimizing the reconstruction error:

(θ, θ′) = argmin_{θ, θ′} L(x, x̂)

where L(x, x̂) is the reconstruction error, and argmin denotes the parameters θ and θ′ that minimize it; for the preprocessed disturbance data set X = {x^(1), x^(2), …, x^(N)}, where N is the number of samples, the reconstruction error L_X is expressed as:

L_X = (1/N) · Σ_{i=1}^{N} MSE(x^(i), x̂^(i))

where x^(i) is the i-th preprocessed disturbance sample, x̂^(i) is its reconstruction, and MSE is the mean square error;
the optimal model parameters are obtained through error back-propagation and a gradient descent algorithm, and the parameter update process is:

θ ← θ − η·∂L/∂θ,  θ′ ← θ′ − η·∂L/∂θ′

where η is the learning rate;
in the training process, the SDAE optimizes the model through self-supervision learning, specifically, any two adjacent layers in the SDAE are regarded as one DAE, and the neural network is trained layer by layer with the aim of minimizing reconstruction errors.
Preferably, in step 4, the SDAE is propagated forward with the coding features of the previous DAE as input data for the next DAE.
Still further, the step 5 further comprises:
the random forest classifier is an ensemble classifier with multiple decision trees DT as weak classifiers, where a single DT is a classification and regression tree CART; for a given sample set D_F, the Gini index is

Gini(D_F) = 1 − Σ_{k=1}^{K} (|C_k|/N)²

where |C_k| is the number of samples in D_F belonging to the k-th class, N is the number of samples, and K is the number of classes; Gini(D_F) represents the probability that a randomly selected sample in D_F is misclassified; the smaller Gini(D_F), the lower the probability that a sample selected from D_F is misclassified, i.e., the higher the purity of D_F;

the sample feature set D_F is divided into two parts D_1 and D_2 according to whether feature F_j of the feature set F = {F_1, F_2, …, F_k} takes the value a:

D_1 = {(x, y) ∈ D_F | F_j(x) = a},  D_2 = D_F − D_1

then, under the condition of feature F_j = a, the Gini index of the set D_F is:

Gini(D_F, F_j = a) = (|D_1|/N)·Gini(D_1) + (|D_2|/N)·Gini(D_2)

where |D_1| and |D_2| are the numbers of samples in sets D_1 and D_2, respectively, and N is the number of samples;

Gini(D_F, F_j = a) represents the uncertainty of the set D_F after the division; the larger the Gini index, the greater the uncertainty of the sample set;
generating n sub-data sets by bootstrap sampling and, with the Gini index as the splitting criterion, training a corresponding CART on each, thereby constructing the random forest classifier;
and training a random forest classifier by using the high-grade features extracted by the SDAE to realize disturbance identification and classification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a flowchart of a power system disturbance classification method based on a stacked denoising autoencoder and a random forest classifier according to an embodiment of the present invention;
FIG. 2 is a block diagram of a denoising autoencoder according to an embodiment of the invention;
FIG. 3 is a block diagram of a stacked denoising self-encoder according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating corresponding reconstruction errors for different stacked denoising autocoders according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating classification accuracy for different decision tree depths and counts according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the two-dimensional visualization of features extracted at different depths of the stacked denoising autoencoder neural network according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a disturbance identification confusion matrix of the disturbance classification method according to the embodiment of the invention;
FIG. 8 is a comparison of the recognition accuracy of the perturbation classification method according to the embodiment of the present invention and other methods under different data loss levels.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it should be understood by those skilled in the art that the described embodiments are only used for illustrating the spirit and idea of the present invention and should not be construed as being limited by the scope of the present invention. All other variations and combinations of technical solutions which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention fall within the protection scope of the present invention.
As shown in fig. 1, the method is a flowchart of a method for classifying disturbance of an electric power system based on a stack denoising autoencoder and a random forest classifier according to an embodiment of the present invention, and the method mainly includes the following steps:
step 1: generating disturbance data by using an offline time domain simulation method;
step 2: standardizing the off-line simulation data by utilizing standardization;
and step 3: constructing and training a stacked denoising self-encoder deep neural network. And training the effective values of the frequency and the voltage within 0.5s after the disturbance occurs as the input of the stacked denoising self-encoder.
And 4, step 4: and extracting the data features by utilizing the trained stacking denoising self-encoder to obtain high-level feature expression.
And 5: and constructing and training a random forest classifier, and classifying the extracted features through the trained random forest classifier to realize disturbance identification.
In step 1, the process of generating the disturbance data by using the offline time domain simulation method specifically includes:
according to the probability of disturbance occurrence and damage caused in the power system, 6 disturbance types of three-phase short circuit (3-phi Flt), single-phase earth fault (phi-g Flt), generator output reduction (GL), load switching on/off (L-on/off) and three-phase line breaking (LT) are selected for simulation. The system is an IEEE 10 machine 39 system, the simulation software is PSD-BPA, the simulation time is 30s, and the disturbance is triggered after 5s, so that the frequency and voltage effective value of each bus are output. The simulation step size is set to 0.02s, taking into account that the PMU upload rate is 50 Hz. To simulate the actual operating conditions of the power system, 60dB of white Gaussian noise was applied in the simulation.
TABLE 1 simulation method of different disturbances
In step 2, the process of normalizing the offline simulation data specifically includes: assuming a PMU reporting rate of 50 Hz, the frequency and voltage within 0.5 s are denoted f = [f_1, f_2, …, f_25]^T and U = [U_1, U_2, …, U_25]^T, respectively. The frequency and voltage signals are normalized separately by z-score standardization:

x̂ = (x − μ)/σ

where x̂ is the normalized data, and μ and σ are the mean and standard deviation, respectively, of the corresponding variable.
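As a concrete illustration of this standardization step, a minimal NumPy sketch follows; the 25-sample window corresponds to 0.5 s at the 50 Hz reporting rate, and the signal values themselves are made up:

```python
import numpy as np

def zscore(x):
    # z-score standardization: subtract the mean, divide by the standard deviation
    return (x - x.mean()) / x.std()

# 0.5 s of PMU frequency data at a 50 Hz reporting rate -> 25 samples (illustrative values)
f = 50.0 + 0.01 * np.random.randn(25)
f_norm = zscore(f)
```

After normalization, each signal has zero mean and unit standard deviation, so frequency and voltage enter the autoencoder on a comparable scale.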
Before describing the specific process of step 3, a relevant description is first made for the stacked denoising self-encoder algorithm.
In step 3, the process of constructing and training the stacked denoising autoencoder deep neural network specifically comprises the following steps:

The network structure of a denoising autoencoder (DAE) is shown in fig. 2. Let x = [x_1, x_2, …, x_m]^T be the DAE input data. First, x is corrupted with a certain probability C to obtain the damaged disturbance data x̃. The DAE then maps the corrupted data to the hidden-layer feature expression h = [h_1, h_2, …, h_t]^T through an encoding operation, and then reconstructs the complete sample x̂ by decoding. The encoding and decoding processes are as follows:

h = f_θ(W·x̃ + b)
x̂ = g_θ′(W′·h + b′)

where W and W′ are the encoding and decoding matrices, respectively; b and b′ are the encoding and decoding bias vectors, respectively; θ = (W, b) and θ′ = (W′, b′) are the encoding and decoding parameters, respectively. f_θ and g_θ′ are activation functions, here the Sigmoid function:

f(x) = 1/(1 + e^(−x))
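The corrupt/encode/decode chain can be sketched in NumPy as follows; the layer sizes, the masking-noise form of the corruption, and the random initialization are illustrative assumptions, not the patent's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, C, rng):
    # damage the input: zero each entry with probability C
    # (masking noise -- one common corruption choice, assumed here)
    return x * (rng.random(x.shape) >= C)

m, t = 50, 30                                             # input / hidden sizes (illustrative)
W, b = 0.1 * rng.standard_normal((t, m)), np.zeros(t)     # encoder parameters theta = (W, b)
W2, b2 = 0.1 * rng.standard_normal((m, t)), np.zeros(m)   # decoder parameters theta' = (W', b')

x = rng.random(m)                  # clean input sample
x_t = corrupt(x, 0.3, rng)         # damaged disturbance data x~
h = sigmoid(W @ x_t + b)           # encoding: h = f_theta(W x~ + b)
x_r = sigmoid(W2 @ h + b2)         # decoding: x^ = g_theta'(W' h + b')
```

Because the network is trained to recover x from x̃, the hidden code h learns features that remain informative when entries of the input are missing, which is exactly the property exploited for lost PMU data.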
During the training process, the DAE adjusts the parameters with the aim of minimizing the reconstruction error:

(θ, θ′) = argmin_{θ, θ′} L(x, x̂)

where L(x, x̂) is the reconstruction error, and argmin denotes the parameters θ and θ′ that minimize it. If the reconstruction error is small enough, the hidden layer contains significant features that can characterize the original data.
For the preprocessed disturbance data set X = {x^(1), x^(2), …, x^(N)}, where N is the number of samples, the reconstruction error L_X of the data set is expressed as follows:

L_X = (1/N) · Σ_{i=1}^{N} MSE(x^(i), x̂^(i))

where x^(i) is the i-th preprocessed disturbance sample, x̂^(i) is its reconstruction, and MSE is the mean square error.
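The data-set reconstruction error translates directly into code; the two-sample data below are made up for illustration:

```python
import numpy as np

def reconstruction_error(X, X_hat):
    # L = (1/N) * sum_i MSE(x_i, x_hat_i), averaged over the N samples
    return float(np.mean([np.mean((x - xh) ** 2) for x, xh in zip(X, X_hat)]))

X = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
X_hat = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
err = reconstruction_error(X, X_hat)  # (0.5 + 0.0) / 2 = 0.25
```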
The optimal model parameters are obtained through error back-propagation and a gradient descent algorithm. The parameter update process is as follows:

θ ← θ − η·∂L/∂θ,  θ′ ← θ′ − η·∂L/∂θ′

where η is the learning rate.
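The update rule can be demonstrated end to end on a toy DAE; the sizes, learning rate, and iteration count below are illustrative assumptions, and the gradients are derived by hand for the Sigmoid/MSE combination described above:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

m, t, eta = 8, 4, 0.5                       # sizes and learning rate (illustrative)
W = 0.1 * rng.standard_normal((t, m)); b = np.zeros(t)     # encoder
W2 = 0.1 * rng.standard_normal((m, t)); b2 = np.zeros(m)   # decoder

x = rng.random(m)                           # clean target
xt = x * (rng.random(m) >= 0.3)             # corrupted input x~

losses = []
for _ in range(200):
    h = sigmoid(W @ xt + b)                 # encode
    xr = sigmoid(W2 @ h + b2)               # decode
    losses.append(np.mean((xr - x) ** 2))
    # back-propagate dL/d(params) for L = mean((xr - x)^2)
    dz2 = (2.0 / m) * (xr - x) * xr * (1 - xr)   # through the output Sigmoid
    dW2 = np.outer(dz2, h); db2 = dz2
    dz1 = (W2.T @ dz2) * h * (1 - h)             # through the hidden Sigmoid
    dW = np.outer(dz1, xt); db = dz1
    # gradient descent update: theta <- theta - eta * dL/dtheta
    W -= eta * dW; b -= eta * db
    W2 -= eta * dW2; b2 -= eta * db2
```

Running the loop drives the reconstruction error down, illustrating that the hand-derived gradients and the update rule fit together.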
A Stacked Denoising Autoencoder (SDAE) is a deep network model formed by stacking DAEs, as shown in fig. 3. The SDAE optimizes the model through self-supervised learning: any two adjacent layers in the SDAE are treated as one DAE, and the network is trained layer by layer with the goal of minimizing the reconstruction error. The SDAE-based feature extraction method is summarized in Algorithm 1.
in step 4, the specific process of extracting the data features by using the trained SDAE is as follows:
The SDAE propagates forward with the coding features of the previous DAE as the input data of the next DAE. Therefore, for an SDAE with L hidden layers, the feature extraction process is as follows:
h^(0) = x,  h^(l) = f_{θ_l}(W^(l)·h^(l−1) + b^(l)),  l = 1, 2, …, L

and the output h^(L) of the top hidden layer is taken as the high-level feature expression.
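The layer-by-layer forward pass can be sketched as follows; the 100-dimensional input and the random weights are illustrative assumptions, while the hidden sizes mirror the [50, 70, 50, 30] structure reported in the experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sdae_features(x, layers):
    # forward-propagate: the coding features of each DAE feed the next one
    h = x
    for W, b in layers:
        h = sigmoid(W @ h + b)
    return h

# hypothetical stack: 100 (input, illustrative) -> 50 -> 70 -> 50 -> 30
dims = [100, 50, 70, 50, 30]
layers = [(0.1 * rng.standard_normal((dims[i + 1], dims[i])), np.zeros(dims[i + 1]))
          for i in range(len(dims) - 1)]

features = sdae_features(rng.random(dims[0]), layers)  # 30-dimensional feature vector
```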
in step 5, the specific process of constructing and training the random forest classifier is as follows:
random Forest (RF) is an integrated algorithm with multiple decision trees as weak classifiers. The final classification result is achieved by majority voting of multiple Decision Trees (DTs). The single DT selected in this study is a classification regression tree (CART) which uses the kini index as a criterion for selecting segmentation features, and the correlation formula is as follows:
For a given sample set D_F, the Gini index is

Gini(D_F) = 1 − Σ_{k=1}^{K} (|C_k|/N)²

where |C_k| is the number of samples in D_F belonging to the k-th class, N is the number of samples, and K is the number of classes. Gini(D_F) represents the probability that a randomly selected sample in D_F is misclassified; the smaller Gini(D_F), the lower that probability, i.e., the higher the purity of D_F.

The sample feature set D_F is divided into two parts D_1 and D_2 according to whether feature F_j of the feature set F = {F_1, F_2, …, F_k} takes the value a:

D_1 = {(x, y) ∈ D_F | F_j(x) = a},  D_2 = D_F − D_1

Then, under the condition of feature F_j = a, the Gini index of the set D_F is:

Gini(D_F, F_j = a) = (|D_1|/N)·Gini(D_1) + (|D_2|/N)·Gini(D_2)

where |D_1| and |D_2| are the numbers of samples in sets D_1 and D_2, respectively, and N is the number of samples.

Gini(D_F, F_j = a) represents the uncertainty of the set D_F after the division; the larger the Gini index, the greater the uncertainty of the sample set. Therefore, the feature and feature value with the minimum Gini index are selected as the optimal split feature and split point.
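The Gini formulas translate directly into code; the label lists below are made-up toy data:

```python
from collections import Counter

def gini(labels):
    # Gini(D_F) = 1 - sum_k (|C_k| / N)^2
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(d1, d2):
    # Gini after a binary split: |D1|/N * Gini(D1) + |D2|/N * Gini(D2)
    n = len(d1) + len(d2)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

print(gini([0, 0, 1, 1]))          # 0.5 -- maximally impure two-class set
print(gini_split([0, 0], [1, 1]))  # 0.0 -- a perfect split
```

A CART grows by choosing, at each node, the split whose `gini_split` value is smallest.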
We generate n sub-datasets using the bootstrap sampling method and, with the Gini index as the split criterion, train a corresponding CART on each, thereby constructing the random forest classifier.
The random forest is used as the classifier to avoid the low generalization performance of an individual classifier. Finally, disturbance identification is achieved by training the RF classifier with the high-level features extracted by the SDAE. The random-forest-based event classification algorithm, covering the generation of the RF and the class decision, is summarized in Algorithm 3.
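Two of the building blocks named here, bootstrap sampling and majority voting, can be sketched as follows; the toy data set and seed are made up, and a full implementation would of course also grow the CARTs:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # draw len(data) items with replacement -> one sub-dataset per tree
    return [rng.choice(data) for _ in data]

def majority_vote(tree_predictions):
    # the random forest's final class is the majority vote of its trees
    return Counter(tree_predictions).most_common(1)[0][0]

rng = random.Random(0)
data = [("sample%d" % i, i % 6) for i in range(10)]  # toy (features, label) pairs
subset = bootstrap_sample(data, rng)                 # same size, drawn with replacement
final_class = majority_vote([2, 1, 2, 2, 5])         # -> 2
```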
Experiments also verify the effects of the above scheme of the embodiment of the present invention.
1. SDAE model structure and parameter setting
The preprocessed data are taken as SDAE input, and the number of hidden-layer neurons is set layer by layer: we first determine the optimal number of neurons for the first layer, then fix it and determine the optimal number for the second layer, and so on, until the reconstruction error (MSE) no longer decreases. Fig. 4 shows the effect of the number of hidden layers and of hidden-layer neurons on the MSE. For the IEEE 39-bus system, at a data loss level of 50%, the optimal number of hidden layers for identifying power system disturbances is 4, with 50, 70, 50, and 30 units per layer.
Disturbance data with different loss levels are trained with the same parameter optimization method, and the reconstruction error on the test set is calculated. The optimal network structure and test results at different data loss levels are shown in table 2; when the loss level exceeds 50%, the reconstruction error increases rapidly, indicating that the maximum loss level at which the model can still reconstruct the original data is 50%. Thus, for the IEEE 39-bus system, the optimal SDAE for disturbance identification has 4 hidden layers with [50, 70, 50, 30] neurons, and the maximum tolerable data loss is 50%.
TABLE 2 optimal SDAE structure at different data loss levels
2. RF parameter setting
The optimal RF parameters are selected based on the features extracted by the SDAE from the disturbance data. First the optimal depth d of the DT is determined, and at that depth the optimal number n of DTs is determined. The optimal d and n are chosen by comparing the average accuracy of 10-fold cross-validation on the validation set.
As shown in fig. 5, the classifier has the best performance when d is 6 and n is 40.
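The d/n selection amounts to an exhaustive grid search; `toy_score` below is a made-up stand-in for the 10-fold cross-validation accuracy, shaped so that it peaks at the values reported in the text:

```python
def best_params(depths, tree_counts, cv_score):
    # exhaustive search for the (d, n) pair with the highest average
    # cross-validation accuracy, mirroring the selection shown in fig. 5
    best = max((cv_score(d, n), d, n) for d in depths for n in tree_counts)
    return best[1], best[2]

# toy stand-in for the 10-fold CV accuracy, peaking at d = 6, n = 40
toy_score = lambda d, n: 1.0 - abs(d - 6) * 0.01 - abs(n - 40) * 0.001
d, n = best_params(range(2, 11), range(10, 101, 10), toy_score)  # -> (6, 40)
```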
3. Feature extraction testing
To evaluate the performance of the method, the raw data and the SDAE-extracted features were mapped into a two-dimensional space and visualized at a data loss rate of 50%. The visualization results are shown in fig. 6, where labels 1 through 6 correspond to three-phase short circuit, single-phase ground fault, reduced generator output, load switching-in, load shedding, and three-phase line break, respectively.
As shown in fig. 6(a), there is too much overlap of samples for each class in the original data space to separate the classes. However, when using features extracted by SDAE, the categories are well separated. Fig. 6(b) - (d) show feature spaces for extracted features for different numbers of hidden layers in the SDAE. As the number of layers increases, the degree of overlap between the different classes decreases.
4. Testing at different levels of data loss
We assume that there is data loss in the PMU data used for disturbance identification. Fig. 7 shows the confusion matrix of the proposed method with the data loss level set to 50%. The confusion matrix is a standard format for describing classification accuracy: a square matrix of order K, where K is the number of classes. Its rows are the disturbance classes obtained by the algorithm, and its columns are the actual disturbance classes. The diagonal values give the probability of correctly identifying each class, and the off-diagonal values give the probability that the j-th class of disturbance is misidentified as the i-th class.
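The row/column convention described for the confusion matrix can be made concrete with a small helper; the labels are made up:

```python
def confusion_matrix(y_true, y_pred, k):
    # rows: class obtained by the algorithm; columns: actual class,
    # following the orientation described for fig. 7 (here as raw counts)
    m = [[0] * k for _ in range(k)]
    for actual, predicted in zip(y_true, y_pred):
        m[predicted][actual] += 1
    return m

cm = confusion_matrix([0, 0, 1, 1, 2], [0, 0, 1, 2, 2], 3)
# -> [[2, 0, 0], [0, 1, 0], [0, 1, 1]]: one class-1 sample was mistaken for class 2
```

Normalizing each column by its total would turn the counts into the per-class probabilities shown in the figure.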
In the case shown in fig. 7, the identification accuracy for the generator-output-reduction event is lower, because the dynamic characteristics of load shedding and generator output reduction are similar, leading to misclassification between them. The overall accuracy on the test data is 98.73%, showing that the method performs well in identifying power system disturbances with missing data.
In addition, at different data loss levels, different feature extraction methods and classifiers are combined to highlight the superiority of the proposed method. The compared feature extraction methods are time-domain manual features (MFT), frequency-domain manual features (MFF), the stacked autoencoder (SAE), and the SDAE; the classifiers are Softmax, the Extreme Learning Machine (ELM), the Linear Support Vector Machine (LSVM), the Gaussian Support Vector Machine (GSVM), DT, and RF. The results are shown in fig. 8.
The following results can be obtained from the figure:
1) At all data loss levels, the accuracy and Micro-F1 of all classifiers rise across the four feature spaces, indicating that the feature extraction capability of the deep neural networks is superior to that of the traditional methods (MFT and MFF).
2) As the data loss level increases, the accuracy gap between the SDAE method and the traditional methods grows, showing that the traditional methods are very sensitive to missing data while the proposed method is strongly robust to it.
3) At the same data loss level and in the same feature space, the recognition rate of the ensemble-learning RF classifier reaches up to 98.73%.
5. Calculating a time estimate
Simulation experiments were performed on a computer with an i7-8700K CPU (3.7 GHz), a GTX 1080 Ti GPU, and 16 GB of memory. The computation comprises feature extraction and classification. Table 2 lists the average computation time per test sample: 3.507 ms in total, of which 3.482 ms is feature extraction and 0.025 ms is classification. The results show that the method has low computational complexity and good real-time performance.
TABLE 2 calculation time of the proposed method
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A power system disturbance classification method based on a stack denoising autoencoder and a random forest classifier is characterized by comprising the following steps:
step 1: generating disturbance data of the power system by using an offline time domain simulation method;
step 2: carrying out standardization processing on the disturbance data obtained by the off-line simulation method in the step 1;
step 3: constructing and training a deep neural network of the stacked denoising autoencoder, training the stacked denoising autoencoder by taking the effective values of frequency and voltage within 0.5 s after the disturbance occurs as its input;
step 4: extracting data features with the stacked denoising autoencoder trained in step 3 to obtain high-level feature expressions;
step 5: constructing and training a random forest classifier, and classifying the high-level features extracted in step 4 with the trained random forest classifier to realize disturbance identification.
2. The power system disturbance classification method according to claim 1, wherein generating the disturbance data by the offline time-domain simulation method in step 1 specifically comprises: simulating 6 disturbance types, namely three-phase short circuit (3-φ Flt), single-phase ground fault (φ-g Flt), generator output reduction (GL), load switching-in, load shedding, and three-phase line tripping (LT). The test system is the IEEE 10-machine 39-bus system, the simulation software is PSD-BPA, the simulation time is 30 s with the simulation step set to 0.02 s, the disturbance is triggered at 5 s, and the frequency and voltage effective values of each bus are output.
3. The power system disturbance classification method according to claim 1, wherein the normalization in step 2 comprises: assuming a PMU reporting rate of 50 Hz, the frequency and voltage within 0.5 s are denoted respectively as

f = [f_1, f_2, …, f_25]^T,  V = [V_1, V_2, …, V_25]^T

The frequency and voltage signals are normalized separately:

x̃ = (x − u)/σ

where x̃ is the normalized data, and u and σ are respectively the mean and standard deviation of the corresponding variable.
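The z-score normalization of claim 3 can be sketched as follows; the sample values and the 25-point window (0.5 s at a 50 Hz reporting rate) are illustrative, not taken from the patent's data:

```python
import numpy as np

def zscore_normalize(signal):
    """Normalize one measured signal: x_norm = (x - u) / sigma,
    with u the mean and sigma the standard deviation of that signal."""
    u = signal.mean()
    sigma = signal.std()
    return (signal - u) / sigma

# Hypothetical example: 25 frequency samples (0.5 s at 50 Hz)
# drifting slightly below 50 Hz after a disturbance.
f = 50.0 - 0.01 * np.arange(25)
f_norm = zscore_normalize(f)
print(f_norm.mean(), f_norm.std())  # mean ≈ 0, standard deviation ≈ 1
```

Frequency and voltage are normalized independently, each with its own mean and standard deviation, so signals on very different scales (Hz vs. per-unit voltage) enter the autoencoder comparably.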
4. The power system disturbance classification method according to claim 1, wherein the stacked denoising autoencoder (SDAE) in step 3 is a deep network model formed by stacking denoising autoencoders (DAE), and constructing and training the SDAE deep neural network comprises the following sub-steps:
S31: let x = [x_1, x_2, …, x_m]^T be the DAE input data. First, x is corrupted with a certain probability C to obtain the corrupted disturbance data x̃. The DAE then maps the corrupted data to the hidden-layer feature expression h = [h_1, h_2, …, h_t]^T through an encoding operation, and reconstructs the complete sample x̂ through a decoding operation. The DAE encoding and decoding processes are:

h = f_θ(Wx̃ + b),  θ = {W, b}
x̂ = g_θ'(W'h + b'),  θ' = {W', b'}

where W and W' are the encoding and decoding matrices, respectively; b and b' are the encoding and decoding bias vectors, respectively; θ and θ' are the encoding and decoding parameters, respectively; f_θ and g_θ' are activation functions, here the Sigmoid function:

f(x) = 1/(1 + e^(−x))

S32: train the SDAE, adjusting the parameters with the objective of minimizing the reconstruction error:

(θ, θ') = argmin_{θ,θ'} L(x, x̂)

where L(x, x̂) is the reconstruction error and argmin denotes the parameters θ and θ' that minimize L(x, x̂). For the preprocessed disturbance data set X = {x^(1), x^(2), …, x^(N)}, where N is the number of data samples, the reconstruction error L is expressed as:

L = (1/N) Σ_{i=1}^{N} MSE(x^(i), x̂^(i))

where x^(i) is the i-th preprocessed disturbance data, x̂^(i) is its reconstruction, and MSE is the mean square error;
the optimal model parameters are obtained through error back-propagation and the gradient descent algorithm, with the parameter update:

θ ← θ − η ∂L/∂θ,  θ' ← θ' − η ∂L/∂θ'

where η is the learning rate;
during training, the SDAE optimizes the model through self-supervised learning; specifically, any two adjacent layers of the SDAE are treated as one DAE, and the neural network is trained layer by layer with the objective of minimizing the reconstruction error.
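A single DAE from sub-steps S31 and S32 can be sketched in NumPy as follows. This is an illustrative toy, not the patented implementation: the dimensions (m = 50, t = 16, N = 8), corruption probability C, learning rate η, and the random data are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions: m = 50 inputs (e.g. 25 frequency plus 25 voltage
# samples), t = 16 hidden units, N = 8 training samples.
m, t, N = 50, 16, 8
X = sigmoid(rng.standard_normal((N, m)))  # toy disturbance data in (0, 1)
W = rng.standard_normal((t, m)) * 0.1     # encoding matrix W
b = np.zeros(t)                           # encoding bias b
W2 = rng.standard_normal((m, t)) * 0.1    # decoding matrix W'
b2 = np.zeros(m)                          # decoding bias b'

# S31: corrupt the input with probability C (masking noise), then
# encode h = f(W x_tilde + b) and decode x_hat = g(W' h + b').
C = 0.2
mask = (rng.random(X.shape) > C).astype(float)
X_tilde = X * mask
H = sigmoid(X_tilde @ W.T + b)
X_hat = sigmoid(H @ W2.T + b2)

# S32: reconstruction error, the mean square error over the batch.
L_before = np.mean((X - X_hat) ** 2)

# One gradient-descent step theta <- theta - eta * dL/dtheta,
# with gradients derived by hand for the sigmoid/MSE network above.
eta = 0.5
dXh = 2.0 * (X_hat - X) / X.size       # dL/dX_hat
dZ2 = dXh * X_hat * (1.0 - X_hat)      # through the output sigmoid
dW2 = dZ2.T @ H
db2 = dZ2.sum(axis=0)
dH = dZ2 @ W2
dZ1 = dH * H * (1.0 - H)               # through the hidden sigmoid
dW = dZ1.T @ X_tilde
db = dZ1.sum(axis=0)
W2 -= eta * dW2; b2 -= eta * db2
W -= eta * dW; b -= eta * db

# The same forward pass after the update.
H = sigmoid(X_tilde @ W.T + b)
L_after = np.mean((X - sigmoid(H @ W2.T + b2)) ** 2)
```

Stacking then follows claim 5: after this DAE is trained, its hidden activations H become the input data for the next DAE, layer by layer.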
5. The method according to claim 4, wherein in step 4 the SDAE is forward-propagated by using the encoded features of each DAE as the input data of the next DAE.
6. The method according to claim 5, wherein step 5 further comprises:
the random forest classifier is an ensemble classifier using multiple decision trees (DT) as weak classifiers, where each DT is a classification and regression tree (CART); for a given sample set D_F, the Gini coefficient is:

Gini(D_F) = 1 − Σ_{k=1}^{K} (|C_k|/N)^2

where |C_k| is the number of samples in D_F belonging to the k-th class, N is the number of samples, and K is the number of classes; Gini(D_F) represents the probability that a randomly selected sample is misclassified; the smaller Gini(D_F) is, the lower the probability that a sample selected from D_F is misclassified, i.e., the higher the purity of D_F;
the sample feature set D_F is divided, according to whether feature F_j of the feature set F = {F_1, F_2, …, F_k} takes the value a, into two parts D_1 and D_2:

D_1 = {(x, y) ∈ D_F | F_j(x) = a},  D_2 = D_F − D_1

then, conditioned on feature F_j = a, the Gini coefficient of the set D_F is:

Gini(D_F, F_j = a) = (|D_1|/N)·Gini(D_1) + (|D_2|/N)·Gini(D_2)

where |D_1| and |D_2| denote the numbers of samples in the sets D_1 and D_2, respectively, and N is the number of samples; Gini(D_F, F_j = a) represents the uncertainty of the set D_F after the division by F_j = a; the larger the Gini index, the greater the uncertainty of the sample set;
n sub-data sets are generated by the bootstrap sampling method, and n CARTs are grown on these sub-data sets using the Gini index as the splitting criterion, thereby constructing the random forest classifier;
the random forest classifier is then trained with the high-level features extracted by the SDAE to realize disturbance identification and classification.
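The Gini computations of claim 6 can be sketched directly; the label values and the one-feature split below are hypothetical examples, not the patent's data:

```python
from collections import Counter

def gini(labels):
    """Gini coefficient of a sample set: 1 - sum_k (|C_k| / N)^2."""
    if not labels:
        return 0.0
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(samples, labels, j, a):
    """Gini of the set after splitting on feature j taking value a:
    (|D1|/N) * Gini(D1) + (|D2|/N) * Gini(D2)."""
    d1 = [y for x, y in zip(samples, labels) if x[j] == a]
    d2 = [y for x, y in zip(samples, labels) if x[j] != a]
    n = len(labels)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

# Hypothetical labels for the 6 disturbance classes, 2 samples each:
labels = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
print(gini(labels))  # 1 - 6 * (2/12)^2 = 5/6 ≈ 0.8333

# A perfectly separating binary feature drives the split Gini to 0:
samples = [(0,), (0,), (1,), (1,)]
print(gini_split(samples, [0, 0, 1, 1], 0, 0))  # 0.0
```

A CART grown with the Gini criterion greedily picks, at each node, the (feature, value) pair minimizing `gini_split`; the random forest repeats this on n bootstrap resamples of the SDAE feature set and classifies by majority vote.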
CN202011132259.3A 2020-10-21 2020-10-21 Electric power system disturbance classification method Pending CN112633315A (en)


Publications (1)

Publication Number Publication Date
CN112633315A true CN112633315A (en) 2021-04-09




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination