CN111798037B - Data-driven optimal power flow calculation method based on stacked extreme learning machine framework
Info
- Publication number: CN111798037B (application CN202010528642.4A)
- Authority: CN (China)
- Prior art keywords: learning machine, extreme learning, stacked, layer, output
- Prior art date: 2020-06-10
- Legal status: Active (granted)
Classifications
- G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N3/045: Neural networks; architecture; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06Q50/06: Energy or water supply
- H02J3/06: Controlling transfer of power between connected networks; controlling sharing of load between connected networks
- H02J2203/20: Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
- Y02E40/70: Smart grids as climate change mitigation technology in the energy generation sector
- Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a data-driven optimal power flow calculation method based on a stacked extreme learning machine framework, which mainly comprises the following steps: 1) establishing an extreme learning machine model; 2) stacking a plurality of extreme learning machine models layer by layer to establish a stacked extreme learning machine; 3) optimizing the stacked extreme learning machine to obtain an optimized stacked extreme learning machine; 4) establishing a data-driven optimal power flow learning framework; 5) solving the data-driven optimal power flow learning framework with the optimized stacked extreme learning machine. The invention can be widely applied to improving the computational efficiency of neural network algorithms in power system analysis.
Description
Technical Field
The invention relates to the fields of power system analysis and artificial-intelligence machine learning, and in particular to a data-driven optimal power flow calculation method based on a stacked extreme learning machine framework.
Background
Optimal power flow (OPF) is one of the most important tools in power system analysis and is widely used in key applications such as market clearing, network optimization, voltage control and generation scheduling. However, the nonlinearity and non-convexity of the OPF model lead to a heavy computational burden, which makes it difficult to meet the demands of on-line power system computation. Although power flow linearization can improve computational efficiency to some extent, it ignores key information such as voltage magnitude and reactive power and cannot guarantee the global optimality of the result. Existing OPF methods mainly improve computational efficiency at the cost of accuracy, so probabilistic analysis requires iterating over a large number of samples, and this high uncertainty poses a considerable hidden risk to the power system.
In recent years, data-driven methods have been widely used in power system analysis, including the estimation of power transfer distribution factors, Jacobian matrices and admittance matrices, uncertainty suppression, and regression. Neural networks are therefore widely applied; however, the performance of a neural network algorithm depends heavily on the choice of hyperparameters, and no effective algorithm currently exists to avoid the excessive manual tuning cost caused by the large number of hyperparameters. Compared with conventional neural network algorithms based on back-propagation (BP), the stacked extreme learning machine (SELM) is a newer machine learning method that randomly generates the input weights of the hidden-layer neurons and determines the output weights through a simple matrix computation. It can significantly accelerate neural network training and reduce the number of hyperparameters that must be tuned. In addition, it divides a large-scale neural network into several small-scale sub-networks computed in series, which lowers the memory footprint and strengthens the feature extraction capability. However, owing to the randomness of the input weights and the analytical solution of the output weights, the learning ability of the SELM is limited, and its learning framework must be further designed to suit complex OPF calculations.
OPF is a commonly used tool in power systems that obtains the optimal solution from the system operating state while taking various constraints into account. By using machine learning to capture the complex relationship between the system operating state and the OPF result, the computational efficiency can be greatly improved. An effective data-driven OPF calculation method requires not only high accuracy and speed but also good generalization capability. The SELM, with its fast training and few tunable parameters, can meet the requirements of OPF calculation. However, the complexity of the relationship between the OPF inputs (system operating state) and outputs (optimal power flow solution) makes it difficult for the original SELM to learn it effectively.
In view of the foregoing, there is a need to develop an SELM algorithm framework that combines high training speed with a small parameter-tuning burden to overcome the above problems in data-driven optimal power flow calculation.
Disclosure of Invention
The invention aims to provide a data-driven optimal power flow calculation method based on a stacked extreme learning machine framework, which mainly comprises the following steps:
1) Establishing an extreme learning machine model, which mainly comprises the following steps:
1.1) An input data set X is acquired.
1.2) The output matrix H of the hidden layer of the extreme learning machine model is established, namely:
H = g(WX + b)   (1)
where H is the hidden layer output matrix, g(·) is the activation function, W is a randomly generated input weight matrix, and b is a randomly generated bias vector.
1.3) The output weight matrix β between the hidden layer and the output layer is calculated, namely:
β = (H^T H)^(-1) H^T T   (2)
where T denotes the target matrix to be learned by the extreme learning machine model.
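For illustration only, a minimal NumPy sketch of steps 1.1) to 1.3) is given below; the sigmoid activation, the number of hidden neurons and the use of a pseudo-inverse are assumptions made for this example and are not prescribed by the invention.

```python
import numpy as np

def train_elm(X, T, n_hidden=100, seed=0):
    """Basic ELM of Eqs. (1)-(2): random hidden layer, analytic output weights.
    X has one sample per row; T holds the corresponding learning targets."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random bias vector
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden layer output, sigmoid g(.)
    beta = np.linalg.pinv(H) @ T                     # output weights, (H^T H)^-1 H^T T
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```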
2) A plurality of extreme learning machine models are stacked layer by layer to establish a stacked extreme learning machine (SELM).
Stacking a plurality of extreme learning machine models layer by layer mainly comprises the following steps:
2.1) Using PCA dimension reduction, the stacked extreme learning machine is divided into a plurality of sub-ELM neural networks stacked layer by layer, and the hidden layer neuron parameters of the first sub-ELM neural network are randomly generated. The number of randomly generated hidden layer neurons is denoted L_m, and the initial value of m is 1.
2.2) The hidden layer output matrix H_m of the m-th sub-ELM neural network is optimized using L2 regularization, namely:
F_m = min_{β_m} ( ||β_m||² + C||H_m β_m − T||² )   (3)
where β_m is the output weight matrix of the m-th iteration and F_m denotes the optimization function.
2.3) The output weight matrix β_m of the m-th iteration is calculated, namely:
β_m = (H_m^T H_m + I/C)^(-1) H_m^T T   (4)
where C is a penalty factor and H_m is the hidden layer output matrix of the m-th iteration.
2.4) The output weight matrix β_m is reduced in dimension by principal component analysis, generating an eigenvector matrix V_m ∈ R^(L_m×L_m). The number of original hidden neurons is denoted L_m, and the number of hidden neurons after dimension reduction is denoted l_m. The matrix formed by the first l_m columns of eigenvectors is denoted Ṽ_m ∈ R^(L_m×l_m).
The reduced hidden layer output matrix H̃_m after dimension reduction is as follows:
H̃_m = H_m Ṽ_m   (5)
2.5) L_m − l_m hidden neurons are randomly generated, and the hidden layer output matrix H_new of these L_m − l_m hidden neurons is calculated.
2.6) The hidden layer output matrix H_(m+1) is iteratively updated, namely:
H_(m+1) = [H̃_m, H_new]   (6)
2.7) Based on the iteratively updated hidden layer output matrix H_(m+1), the eigenvector matrix V_(m+1) is optimized to obtain the optimized eigenvectors Ṽ_(m+1), and the procedure returns to step 2.2) until the iteration ends.
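The loop below is an illustrative NumPy sketch of steps 2.1) to 2.7) only; the sigmoid activation, the way PCA is applied to β_m, and the example sizes L, l and number of layers are assumptions rather than values fixed by the invention.

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def train_selm(X, T, L=200, l=20, n_layers=5, C=2**10, seed=0):
    """Stacked ELM sketch: each layer keeps l PCA-compressed features of the previous
    hidden layer (Eq. (5)) and appends L - l freshly randomized hidden neurons (Eq. (6))."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))
    b = rng.standard_normal(L)
    H = sigmoid(X @ W + b)                              # first sub-ELM hidden layer
    for _ in range(n_layers):
        beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)   # Eq. (4)
        eigval, V = np.linalg.eigh(beta @ beta.T)       # PCA on the output weights
        V_tilde = V[:, np.argsort(eigval)[::-1][:l]]    # first l principal directions
        H_tilde = H @ V_tilde                           # Eq. (5): compressed features
        W_new = rng.standard_normal((X.shape[1], L - l))
        b_new = rng.standard_normal(L - l)
        H_new = sigmoid(X @ W_new + b_new)              # L - l new random neurons
        H = np.hstack([H_tilde, H_new])                 # Eq. (6)
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
    return H, beta   # in practice the per-layer W, b and V_tilde would be stored for inference
```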
3) A data-driven optimal power flow learning framework is established based on the stacked extreme learning machine.
The objective function F of the data-driven optimal power flow learning framework is as follows:
F = Σ_{i∈S_G} (a_2i PG_i² + a_1i PG_i + a_0i)   (7)
The constraints are shown in formulas (8) to (12), namely:
PG_i − PD_i = V_i Σ_{j∈S_B} V_j (G_ij cos θ_ij + B_ij sin θ_ij), QG_i − QD_i = V_i Σ_{j∈S_B} V_j (G_ij sin θ_ij − B_ij cos θ_ij)   (8)
PF_k = PF_ij = V_i V_j (G_ij cos θ_ij + B_ij sin θ_ij) − V_i² G_ij   (9)
QF_k = QF_ij = V_i V_j (G_ij sin θ_ij − B_ij cos θ_ij) + V_i² B_ij   (10)
PG_i* ≤ PG_i ≤ PG_i¯, QG_i* ≤ QG_i ≤ QG_i¯   (11)
V_i* ≤ V_i ≤ V_i¯, |PF_ij| ≤ PF_ij¯   (12)
where PD_i and QD_i represent the load active and reactive power demands, respectively; PF_k, QF_k, V_i and θ_i represent the state variables of branch active power, branch reactive power, voltage magnitude and voltage phase angle, respectively; PG_i and QG_i represent the control variables of generator active and reactive power output, respectively; F is the objective function of the system operating cost at the optimal steady state; S_G, S_B and S_K represent the sets of system generators, nodes and branches, respectively; i and j are node indices; k is the branch index; a_2i, a_1i and a_0i are the power generation cost coefficients; θ_ij = θ_i − θ_j is the phase angle difference between nodes i and j; G_ij and B_ij represent the conductance and susceptance between nodes i and j, respectively; PF_ij represents the active power state variable of the branch between nodes i and j; the superscript * denotes a lower limit and the superscript ¯ denotes an upper limit.
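As a small illustrative helper under the same notation (the array-based interface and function names are assumptions for the example), the quadratic generation cost of Eq. (7) and the box limits of Eqs. (11) and (12) could be evaluated as follows:

```python
import numpy as np

def generation_cost(PG, a2, a1, a0):
    """Eq. (7): total operating cost for the generator outputs PG over the set S_G."""
    return float(np.sum(a2 * PG**2 + a1 * PG + a0))

def within_limits(x, lower, upper):
    """Generic box-constraint check used for the PG, QG, V and |PF| limits."""
    return bool(np.all((lower <= x) & (x <= upper)))
```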
4) A reinforcement layer is set in the stacked extreme learning machine to strengthen the learning ability of the stacked extreme learning machine framework; the reinforcement layer is a multi-supervision layer used to correct the errors of the stacked extreme learning machine.
5) The data-driven optimal power flow learning framework is solved using the optimized stacked extreme learning machine, with the following main steps:
5.1) The active power demand PD_i and reactive power demand QD_i are input into the stacked extreme learning machine, which outputs the branch active power PF_k and branch reactive power QF_k.
5.2) The branch active power PF_k and branch reactive power QF_k are input into the stacked extreme learning machine, which outputs the voltage magnitude state variable V_i and the voltage phase angle state variable θ_i.
5.3) The active power demand PD_i, reactive power demand QD_i, branch active power PF_k, branch reactive power QF_k, voltage magnitude state variable V_i and voltage phase angle state variable θ_i are input into the stacked extreme learning machine, which outputs the generator active power output control variable PG_i, the reactive power output control variable QG_i and the objective function F of the system operating cost at the optimal steady state.
It should be noted that the invention first builds an SELM algorithm framework based on the stacked extreme learning mechanism with a multi-layer neural network structure. On this basis, the invention provides a data-driven optimal power flow learning framework based on OPF feature decomposition and error correction, and a three-stage SELM neural network structure is built on this model to improve learning accuracy. The learning targets are separated by the three-stage input-output layer design, thereby correcting learning errors and improving the learning ability of the SELM. The invention develops a data-driven OPF learning framework that decomposes the features of the OPF mathematical model into three stages so as to reduce learning complexity and learning error. It particularly involves feature decomposition, multi-parametric programming (MPP), network structure, optimal power flow, sample classification and the stacked extreme learning machine.
The invention achieves the following technical effects:
1) The SELM mapping is used directly, which avoids the time consumed by the large number of iterations in OPF calculation;
2) No system topology or parameters are required;
3) A high-quality solution can be obtained in a short time, providing guidance for improving the OPF calculation speed;
4) The invention constructs a data-driven OPF learning framework, thereby improving training speed and reducing hyperparameter tuning.
In conclusion, the method can be widely applied to improving the computational efficiency of neural network algorithms in power system analysis.
Drawings
FIG. 1 is a data driven OPF learning framework of the method of the invention;
FIG. 2 is a schematic diagram of a three-stage SELM network structure and its reinforcement layer design according to the present invention;
FIG. 3 is a schematic diagram of a first stage enhancement mode of the present invention;
FIG. 4 is a comparison of IEEE 39 node system voltage magnitudes;
FIG. 5 is an IEEE 39 node system active power output error comparison;
Detailed Description
The present invention is further described below with reference to examples, but the scope of the subject matter of the invention described above should not be construed as limited to the following examples. Various substitutions and alterations made according to ordinary skill and customary means in the art, without departing from the technical spirit of the invention, are intended to be included within the scope of the invention.
Example 1:
Referring to FIG. 1 to FIG. 3, a data-driven optimal power flow (OPF) calculation method based on a stacked extreme learning machine (SELM) framework mainly includes the following steps:
1) An extreme learning machine (ELM) model is built, with the following main steps:
1.1) An input data set X is acquired.
1.2) The output matrix H of the hidden layer of the extreme learning machine model is established, namely:
H=g(WX+b) (1)
where H is the hidden layer output matrix. g (·) is the activation function. W is a randomly generated input weight matrix. b is a randomly generated bias vector.
1.3) The output weight matrix β between the hidden layer and the output layer is calculated, namely:
β = (H^T H)^(-1) H^T T   (2)
where T denotes the target matrix to be learned by the extreme learning machine model.
2) Stacking a plurality of extreme learning machine models layer by layer to establish a stacked extreme learning machine.
Stacking a plurality of extreme learning machine models layer by layer to establish a stacked extreme learning machine mainly comprises the following steps:
2.1) Using PCA (principal component analysis) dimension reduction, the stacked extreme learning machine is divided into a plurality of sub-ELM neural networks stacked layer by layer, and the hidden layer neuron parameters of the first sub-ELM neural network are randomly generated. The number of randomly generated hidden layer neurons is denoted L_m, and the initial value of m is 1.
2.2) Since some parameters are obtained from the previous layer after dimension reduction, only the remaining parameters are randomly generated. The information of the input data is transferred from the first layer to the last layer, so the hidden layer output matrix H_m of the m-th sub-ELM neural network can be optimized using L2 regularization, namely:
F_m = min_{β_m} ( ||β_m||² + C||H_m β_m − T||² )   (3)
where β_m is the output weight matrix of the m-th iteration and F_m denotes the optimization function.
2.3) The output weight matrix β_m of the m-th iteration is calculated, namely:
β_m = (H_m^T H_m + I/C)^(-1) H_m^T T   (4)
where C is a penalty factor that trades off the training error against the norm of the output weights, H_m is the hidden layer output matrix of the m-th iteration, and m is the iteration index, i.e., the number of sub-ELM neural networks.
2.4) The output weight matrix β_m is reduced in dimension by principal component analysis, generating an eigenvector matrix V_m ∈ R^(L_m×L_m). The number of original hidden neurons is denoted L_m, and the number of hidden neurons after dimension reduction is denoted l_m. The matrix formed by the first l_m columns of eigenvectors is denoted Ṽ_m ∈ R^(L_m×l_m); R denotes the real numbers, and L_m×L_m and L_m×l_m denote the matrix dimensions.
The reduced hidden layer output matrix H̃_m after dimension reduction is as follows:
H̃_m = H_m Ṽ_m   (5)
2.5) L_m − l_m hidden neurons are randomly generated, and the hidden layer output matrix H_new of these L_m − l_m hidden neurons is calculated.
2.6) The hidden layer output matrix H_(m+1) is iteratively updated, namely:
H_(m+1) = [H̃_m, H_new]   (6)
2.7) Based on the iteratively updated hidden layer output matrix H_(m+1), the eigenvector matrix V_(m+1) is optimized to obtain the optimized eigenvectors Ṽ_(m+1), and the procedure returns to step 2.2) until the iteration ends.
3) A data-driven optimal power flow learning framework is established based on the stacked extreme learning machine.
The objective function F of the data-driven optimal power flow learning framework is as follows:
F = Σ_{i∈S_G} (a_2i PG_i² + a_1i PG_i + a_0i)   (7)
The constraints are shown in formulas (8) to (12), namely:
PG_i − PD_i = V_i Σ_{j∈S_B} V_j (G_ij cos θ_ij + B_ij sin θ_ij), QG_i − QD_i = V_i Σ_{j∈S_B} V_j (G_ij sin θ_ij − B_ij cos θ_ij)   (8)
PF_k = PF_ij = V_i V_j (G_ij cos θ_ij + B_ij sin θ_ij) − V_i² G_ij   (9)
QF_k = QF_ij = V_i V_j (G_ij sin θ_ij − B_ij cos θ_ij) + V_i² B_ij   (10)
PG_i* ≤ PG_i ≤ PG_i¯, QG_i* ≤ QG_i ≤ QG_i¯   (11)
V_i* ≤ V_i ≤ V_i¯, |PF_ij| ≤ PF_ij¯   (12)
where PD_i and QD_i represent the load active and reactive power demands, respectively; PF_k, QF_k, V_i and θ_i represent the state variables of branch active power, branch reactive power, voltage magnitude and voltage phase angle, respectively; PG_i and QG_i represent the control variables of generator active and reactive power output, respectively; F is the objective function of the system operating cost at the optimal steady state; S_G, S_B and S_K represent the sets of system generators, nodes and branches, respectively; i and j are node indices; k is the branch index; a_2i, a_1i and a_0i are the power generation cost coefficients; θ_ij = θ_i − θ_j is the phase angle difference between nodes i and j; G_ij and B_ij represent the conductance and susceptance between nodes i and j, respectively; PF_ij represents the active power state variable of the branch between nodes i and j; the superscript * denotes a lower limit and the superscript ¯ denotes an upper limit; PG_i* and PG_i¯ denote the lower and upper limits of the generator active power output control variable; QG_i* and QG_i¯ denote the lower and upper limits of the generator reactive power output control variable; V_i* and V_i¯ denote the lower and upper voltage magnitude limits; and PF_ij¯ denotes the upper limit of the branch active power state variable.
The OPF model contains the physical information of the power system, including the grid topology, line parameters and the corresponding physical laws. However, the nonlinearity and non-convexity of the OPF model mean that its solution requires many iterations and consumes a large amount of time. From a data-driven perspective, the OPF calculation can be regarded as a nonlinear mapping: the system power demands PD_i, QD_i are taken as input, and the OPF results PG_i, QG_i, V_i, θ_i, PF_k, QF_k, F are taken as output. The mapping between input and output can be learned offline from historical or simulated data.
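As an illustration of how such input/output samples might be arranged for offline learning (the function name, the array layout and the assumption that OPF results have already been computed for each scenario are all hypothetical):

```python
import numpy as np

def build_dataset(PD, QD, PF, QF, V, theta, PG, QG, F):
    """Arrange precomputed OPF results (one row per operating scenario) into the
    input matrix X and the stage-wise learning targets of the framework."""
    X = np.hstack([PD, QD])                      # input: nodal power demands
    T1 = np.hstack([PF, QF])                     # stage-1 targets: branch flows
    T2 = np.hstack([V, theta])                   # stage-2 targets: bus voltages
    T3 = np.hstack([PG, QG, F.reshape(-1, 1)])   # stage-3 targets: controls and cost
    return X, T1, T2, T3
```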
4) A reinforcement layer is provided in the stacked extreme learning machine to enhance the learning ability of the stacked extreme learning machine framework. The reinforcement layer is a multi-supervision layer used to correct the errors of the stacked extreme learning machine: the multiple supervision layers correct the errors of a single supervision layer. The model separates the learning targets, thereby correcting the learning errors of the OPF model and improving learning accuracy. As shown in FIG. 2, each stage has a hidden layer with a similar structure, and the model is provided with an enhancement mode with a multi-supervision-layer structure to improve the learning ability of the SELM. Taking the first stage of the sub-network as an example, as shown in FIG. 3, the hidden layer of the SELM includes two components:
a) SELM regression with a single supervision layer;
b) SELM regression with multiple supervision layers.
In practice, the output generated from the input PD_i, QD_i may deviate from the true values PF_k, QF_k, so multiple supervision layers are used to reduce the error between PF_k, QF_k and the estimates produced by single-layer SELM learning.
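An illustrative sketch of this enhancement mode is given below, assuming the second supervision layer is itself an ELM trained on the residual of the first one; this residual-correction form and the reuse of the hypothetical train_elm/predict_elm helpers from the earlier sketch are assumptions, not the exact structure of the invention.

```python
import numpy as np

def train_with_reinforcement(X, T, n_hidden=200):
    """First supervision layer learns T from X; the reinforcement (second supervision)
    layer learns the remaining error and corrects the output."""
    W1, b1, beta1 = train_elm(X, T, n_hidden)
    T_hat = predict_elm(X, W1, b1, beta1)
    W2, b2, beta2 = train_elm(np.hstack([X, T_hat]), T - T_hat, n_hidden)

    def predict(X_new):
        y1 = predict_elm(X_new, W1, b1, beta1)
        y2 = predict_elm(np.hstack([X_new, y1]), W2, b2, beta2)
        return y1 + y2   # single-layer estimate plus learned correction
    return predict
```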
5) Referring to FIG. 1, the data-driven optimal power flow (OPF) learning framework is solved using the optimized stacked extreme learning machine, with the following main steps:
5.1) First stage (f: PD_i, QD_i → PF_k, QF_k): the branch active power and branch reactive power are learned; the branch powers are separated from the complex OPF model for SELM learning. That is, the active power demand PD_i and reactive power demand QD_i are input into the stacked extreme learning machine, which outputs the branch active power PF_k and branch reactive power QF_k. In FIG. 1, P_i and Q_i represent the active and reactive power of node i, respectively, and P_ij and Q_ij represent the active and reactive power of the branch between nodes i and j, respectively.
5.2) Second stage (f: PF_k, QF_k → V_i, θ_i): the voltage magnitude and voltage phase angle are learned; this stage covers the physical information in the power flow model, including the line parameters and the corresponding physical laws. That is, the branch active power PF_k and branch reactive power QF_k are input into a stacked extreme learning machine provided with a reinforcement layer, which outputs the voltage magnitude state variable V_i and the voltage phase angle state variable θ_i.
5.3) Third stage (f: PD_i, QD_i, PF_k, QF_k, V_i, θ_i → PG_i, QG_i, F): the control variables and the objective function value are learned, further improving learning performance. That is, the active power demand PD_i, reactive power demand QD_i, branch active power PF_k, branch reactive power QF_k, voltage magnitude state variable V_i and voltage phase angle state variable θ_i are input into a stacked extreme learning machine provided with a reinforcement layer, which outputs the generator active power output control variable PG_i, the reactive power output control variable QG_i and the objective function F of the system operating cost at the optimal steady state.
The core idea of the solution process is to reduce the learning difficulty of the OPF. The whole process follows the ResNet design concept and provides a path that directly connects input and output in the intermediate layers. The second and third stages form an error correction process within the overall solution: from the state variables obtained in the first stage, the learning targets of the model can be calculated directly through the power flow model. The optimized SELM models are combined through the three stages, the learning error is gradually reduced, and the accuracy requirement is finally met.
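A minimal sketch of the resulting three-stage inference chain is shown below; the function name and the idea that each stage is simply a trained predictor callable (for example, one returned by the hypothetical helpers above) are assumptions for illustration.

```python
import numpy as np

def opf_three_stage(PD, QD, stage1, stage2, stage3):
    """Chained data-driven OPF: each stage maps its inputs to the next group of quantities."""
    demand = np.hstack([PD, QD])
    PF_QF = stage1(demand)                                  # stage 1: branch flows
    V_theta = stage2(PF_QF)                                 # stage 2: voltage magnitudes and angles
    PG_QG_F = stage3(np.hstack([demand, PF_QF, V_theta]))   # stage 3: controls and cost
    return PF_QF, V_theta, PG_QG_F
```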
Example 2:
Referring to FIG. 4 and FIG. 5, an experiment verifying the data-driven optimal power flow calculation method based on the stacked extreme learning machine framework mainly comprises the following steps:
1) Building the experimental systems: the IEEE 39-, 57- and 118-node systems and the Polish 2383-node system. Wind and photovoltaic generation are connected at different nodes; the wind generation follows a Weibull distribution and the photovoltaic generation follows a Beta distribution. The renewable energy penetration and load fluctuation rates are shown in Table 1, and the comparison results of the various schemes are shown in Table 2. The hardware and software used in the simulation are an Intel i7-8700K CPU, 32 GB RAM, Windows 10 and MATLAB 2018b.
2) The accuracy index p is introduced to measure learning performance, i.e., the probability that the learning error is less than a threshold thr: p = Pr(|T̂ − T| < thr),
where T̂ and T are the predicted value and the actual value, respectively. For V and θ, the thresholds thr are 0.001 p.u. and 0.5°, respectively; for PF, QF, PG and QG, the threshold thr is 1% of the corresponding mean value in the training data; for the objective function value F, the threshold thr is 0.1% of its mean value in the training data.
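A small illustrative implementation of this accuracy index (the array-based interface is an assumption):

```python
import numpy as np

def accuracy_index(T_pred, T_true, thr):
    """Fraction of predictions whose absolute error is below the threshold thr."""
    return float(np.mean(np.abs(T_pred - T_true) < thr))
```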
Table 1 Uncertainty settings in the cases
Table 2 Comparison results of the various schemes (○: considered; ×: not considered)
Table 3 Improvements in the IEEE 39-node system
3) Evaluation analysis
To demonstrate the benefits of the proposed method, the learning effects of M3, M4 and M5 are compared on the IEEE 39-node system, as shown in Table 3. The numbers of training and test samples are 30000 and 10000, respectively. The number of hidden neurons L is 1000, and the reduced number of hidden neurons l is 100, i.e., 10% of L. The number of iterations of the single-layer SELM is 10. The number of layers in the enhancement mode is 2. The penalty factor C is set to 230. From the comparison, the following conclusions can be drawn:
a) For M3 and M4, direct learning with the original SELM struggles to achieve a good result because of the complexity of the OPF model. To address this, the invention proposes an OPF regression framework that decomposes the learning task into three stages, which reduces the learning complexity of the OPF model and significantly improves the learning effect for every variable.
b) For M4 and M5, the learning effect is improved by the enhancement mode. In the enhancement mode, the two supervision layers balance accuracy against network complexity, and the learning accuracy is greatly improved.
The invention also examines the error correction behavior of the three-stage SELM network. The generator voltage magnitude (the learning target of the second stage) and the active power output (the learning target of the third stage) can be calculated through the physical power flow model using the outputs of the first stage (PF_k, QF_k) and of the second stage (V_i, θ_i), respectively. For the voltage, the values obtained in the first and second stages are compared with the actual values, as shown in FIG. 4. The active power output errors obtained in the second and third stages are compared in FIG. 5. The voltage magnitude error is already corrected in the second stage, and the third stage mainly corrects the errors of the control variables and the objective function value. The three stages together improve the learning accuracy and meet the accuracy requirement.
4) Comparison with existing algorithms
The performance of M0, M1, M2 and M5 is compared, and the results are shown in Tables 4 and 5. The class parameter m is set to 2 in the sample pre-classification, and the voltage magnitude constraint is selected to identify the critical region. For the IEEE 39-, 57- and 118-node systems, the hyperparameter settings are the same as those used in the case above. The deep-learning-based network has 4 hidden layers with 400 nodes per layer, and pre-training and fine-tuning are set to 200 and 500 iterations, respectively. For the Polish 2383-node system, the deep-learning-based network has 4 hidden layers with 700 nodes per layer. In the proposed algorithm, the numbers of training and test samples are set to 100000 and 10000, respectively, the numbers of hidden neurons L and reduced hidden neurons l are set to 7000 and 700, respectively, and the number of iterations of the single-layer SELM is set to 30.
Table 4 learning effect comparison
Table 5 training versus test time
From the experimental results, the following conclusions can be drawn:
a) In all OPF calculation cases, the proposed method achieves the highest accuracy.
b) The test times of the neural networks trained in M1, M2 and M5 are acceptable and much shorter than that of M0.
c) Because of the BP process, the training cost of the deep learning method is far higher than that of the proposed algorithm. Moreover, the deep learning method requires tuning a large number of hyperparameters. In contrast, the proposed method can easily be migrated to a different system with similar accuracy after only slight adjustment of the hyperparameters.
In summary, the data-driven OPF method based on the SELM framework and physical information proposed by the invention decomposes the learning task into three stages, which improves the learning ability and significantly reduces the learning complexity. The invention can therefore provide technical support for power system analysis.
Claims (1)
1. A data-driven optimal power flow calculation method based on a stacked extreme learning machine framework, characterized by mainly comprising the following steps:
1) Establishing an extreme learning machine model;
2) Stacking a plurality of extreme learning machine models layer by layer to establish a stacked extreme learning machine;
3) Establishing a data-driven optimal power flow learning framework based on the stacked extreme learning machine;
4) Setting a reinforcement layer in the stacked extreme learning machine to strengthen the learning ability of the stacked extreme learning machine framework;
5) Solving the data-driven optimal power flow learning framework by using the stacked extreme learning machine;
the main steps of establishing the extreme learning machine model are as follows:
1.1 Acquiring an input data set X;
1.2) establishing the output matrix H of the hidden layer of the extreme learning machine model, namely:
H = g(WX + b)   (1)
wherein H is the hidden layer output matrix; g(·) is the activation function; W is a randomly generated input weight matrix; b is a randomly generated bias vector;
1.3) calculating the output weight matrix β between the hidden layer and the output layer, namely:
β = (H^T H)^(-1) H^T T   (2)
Wherein T represents a target matrix for learning of the extreme learning machine model;
stacking a plurality of extreme learning machine models layer by layer to establish the stacked extreme learning machine mainly comprises the following steps:
2.1) dividing the stacked extreme learning machine into a plurality of sub-ELM neural networks stacked layer by layer using PCA dimension reduction, and randomly generating the hidden layer neuron parameters of the first sub-ELM neural network; the number of randomly generated hidden layer neurons is denoted L_m; the initial value of m is 1;
2.2) optimizing the hidden layer output matrix H_m of the m-th sub-ELM neural network using L2 regularization, namely:
F_m = min_{β_m} ( ||β_m||² + C||H_m β_m − T||² )   (3)
wherein β_m is the output weight matrix of the m-th iteration; F_m denotes the optimization function; C is a penalty factor; H_m is the hidden layer output matrix of the m-th iteration;
2.3) calculating the output weight matrix β_m of the m-th iteration, namely:
β_m = (H_m^T H_m + I/C)^(-1) H_m^T T   (4)
wherein C is a penalty factor;
2.4) reducing the dimension of the output weight matrix β_m by principal component analysis, and generating an eigenvector matrix V_m ∈ R^(L_m×L_m); the number of original hidden neurons is denoted L_m, and the number of hidden neurons after dimension reduction is denoted l_m; the matrix formed by the first l_m columns of eigenvectors is denoted Ṽ_m ∈ R^(L_m×l_m);
the reduced hidden layer output matrix H̃_m after dimension reduction is as follows:
H̃_m = H_m Ṽ_m   (5)
2.5) randomly generating L_m − l_m hidden neurons, and calculating the hidden layer output matrix H_new of these L_m − l_m hidden neurons;
2.6) iteratively updating the hidden layer output matrix H_(m+1), namely:
H_(m+1) = [H̃_m, H_new]   (6)
2.7) based on the iteratively updated hidden layer output matrix H_(m+1), optimizing the eigenvector matrix V_(m+1) to obtain the optimized eigenvectors Ṽ_(m+1), and returning to step 2.2) until the iteration ends;
the objective function F of the data-driven optimal power flow learning framework is as follows:
F = Σ_{i∈S_G} (a_2i PG_i² + a_1i PG_i + a_0i)   (7)
wherein a_2i, a_1i, a_0i are the power generation cost coefficients; PG_i represents the control variable of the generator active power output; F is the objective function of the system operating cost;
the constraints are shown in formulas (8) to (12), respectively, namely:
PG_i − PD_i = V_i Σ_{j∈S_B} V_j (G_ij cos θ_ij + B_ij sin θ_ij), QG_i − QD_i = V_i Σ_{j∈S_B} V_j (G_ij sin θ_ij − B_ij cos θ_ij)   (8)
PF_k = PF_ij = V_i V_j (G_ij cos θ_ij + B_ij sin θ_ij) − V_i² G_ij   (9)
QF_k = QF_ij = V_i V_j (G_ij sin θ_ij − B_ij cos θ_ij) + V_i² B_ij   (10)
PG_i* ≤ PG_i ≤ PG_i¯, QG_i* ≤ QG_i ≤ QG_i¯   (11)
V_i* ≤ V_i ≤ V_i¯, |PF_ij| ≤ PF_ij¯   (12)
wherein PD_i, QD_i represent the load active and reactive power demands, respectively; PF_k, QF_k, V_i, θ_i represent the state variables of branch active power, branch reactive power, voltage magnitude and voltage phase angle, respectively; PG_i, QG_i represent the control variables of the generator active and reactive power output, respectively; S_G, S_B, S_K represent the sets of system generators, nodes and branches, respectively; i, j are node indices; k is the branch index; θ_ij = θ_i − θ_j is the phase angle difference between nodes i and j; G_ij, B_ij represent the conductance and susceptance between nodes i and j, respectively; PF_ij represents the active power state variable of the branch between nodes i and j; the superscript * denotes a lower limit and the superscript ¯ denotes an upper limit;
the reinforcement layer is a multi-supervision layer and is used for correcting errors of the stacked extreme learning machine;
the main steps of solving the data-driven optimal power flow learning framework by using the stacked extreme learning machine are as follows:
5.1) inputting the active power demand PD_i and reactive power demand QD_i into the stacked extreme learning machine, and outputting the branch active power PF_k and branch reactive power QF_k;
5.2) inputting the branch active power PF_k and branch reactive power QF_k into the stacked extreme learning machine, and outputting the voltage magnitude state variable V_i and the voltage phase angle state variable θ_i;
5.3) inputting the active power demand PD_i, reactive power demand QD_i, branch active power PF_k, branch reactive power QF_k, voltage magnitude state variable V_i and voltage phase angle state variable θ_i into the stacked extreme learning machine, and outputting the generator active power output control variable PG_i, the reactive power output control variable QG_i and the objective function F of the system operating cost at the optimal steady state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010528642.4A CN111798037B (en) | 2020-06-10 | 2020-06-10 | Data-driven optimal power flow calculation method based on stacked extreme learning machine framework |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111798037A CN111798037A (en) | 2020-10-20 |
CN111798037B true CN111798037B (en) | 2024-08-06 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12027852B2 (en) | 2021-09-23 | 2024-07-02 | City University Of Hong Kong | Deep learning-based optimal power flow solution with applications to operating electrical power systems |
CN113987940B (en) * | 2021-10-28 | 2024-02-23 | 国网山东省电力公司烟台供电公司 | ELM integrated online learning-based power system tide data driving regression method |
CN116316629B (en) * | 2022-12-27 | 2024-09-03 | 杭州电力设备制造有限公司 | A data-driven optimal power flow calculation method considering topological feature learning |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095652B (en) * | 2015-07-10 | 2017-10-03 | 东北大学 | Sample component assay method based on stack limitation learning machine |
CN105160437A (en) * | 2015-09-25 | 2015-12-16 | 国网浙江省电力公司 | Load model prediction method based on extreme learning machine |
CN109871810A (en) * | 2019-02-22 | 2019-06-11 | 上海海事大学 | Analysis method of wave motion waveform based on extreme learning machine under Dropout constraint |
Non-Patent Citations (3)
- Xingyu Lei et al., "Data-driven alternating current optimal power flow: A Lagrange multiplier based approach," Energy Reports, vol. 8, suppl. 8, pp. 748-755, 2022-10-18.
- Xingyu Lei et al., "Data-Driven Formulation of Chance-Constrained AC Optimal Power Flow Considering Optimization of AGC Participation Factors," SSRN, pp. 1-12, 2023-08-29.
- Xingyu Lei et al., "Data-Driven Optimal Power Flow: A Physics-Informed Machine Learning Approach," IEEE Transactions on Power Systems, vol. 36, no. 1, pp. 346-354, 2020-06-12.
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant