Article

Comparison of Multiple Linear Regression, Artificial Neural Network, Extreme Learning Machine, and Support Vector Machine in Deriving Operation Rule of Hydropower Reservoir

Wen-Jing Niu, Zhong-Kai Feng, Bao-Fei Feng, Yao-Wu Min, Chun-Tian Cheng and Jian-Zhong Zhou

1 Bureau of Hydrology, ChangJiang Water Resources Commission, Wuhan 430010, China
2 School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
3 Institute of Hydropower and Hydroinformatics, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Water 2019, 11(1), 88; https://doi.org/10.3390/w11010088
Submission received: 26 November 2018 / Revised: 13 December 2018 / Accepted: 29 December 2018 / Published: 7 January 2019
(This article belongs to the Section Water Resources Management, Policy and Governance)

Abstract: The operation rule plays an important role in the scientific management of hydropower reservoirs, because a scientifically sound operating rule can help operators make an approximately optimal decision with limited runoff prediction information. In past decades, various effective methods have been developed by researchers all over the world, but there are few publications evaluating the performances of different methods in deriving the operation rule of hydropower reservoirs. To achieve a satisfactory scheduling process with limited streamflow data, four methods are used to derive the operation rule of hydropower reservoirs, including multiple linear regression (MLR), artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). Then, the data from 1952 to 2015 for the Hongjiadu reservoir in China are chosen as the case study, and several quantitative statistical indexes are adopted to evaluate the performances of the different models. The radial basis function is chosen as the kernel function of SVM, while the sigmoid function is used in the hidden layers of ELM and ANN. The simulations show that the three artificial intelligence algorithms (ANN, SVM, and ELM) provide better performances than the conventional MLR and scheduling graph methods. Hence, for scholars in the hydropower operation field, applying artificial intelligence algorithms to derive the operation rule of a hydropower reservoir may be challenging, but it represents valuable research work for the future.

1. Introduction

As a classical tool for adjusting natural runoff, reservoirs play an increasingly important role in human society [1]. In practice, reservoirs need to satisfy a variety of practical requirements from various administrative departments, such as flood control, power generation, agricultural irrigation, water supply, and ecological protection [2]. In addition, booming socio-economic development has caused an unprecedented imbalance between water supply and water demand [3], making it essential to make the utmost of the regulation abilities of all reservoirs [4]. As a result, reservoir operation optimization has become one of the most significant tasks in water resources and power systems over the past decades [5]. In general, when the inflow in each scheduling period is known, the global optimal solution of the reservoir operation problem can be easily obtained using dynamic programming or other optimization methods [6]. Traditionally, this dispatching pattern is identified as deterministic optimization, and the corresponding scheduling result denotes the best solution that can be found in this scenario [7]. Nevertheless, it is difficult to obtain perfect future runoff information because of the limitations of existing runoff forecasting technology. That is to say, deterministic optimization only reflects a fixed runoff case and is not suitable for uncertain environments. In recent years, fast-growing computer technology has markedly promoted the collection, processing, and storage of the multi-source heterogeneous data produced over the entire life-cycle of a hydropower reservoir, which indicates that abundant data are available to provide potential technical support for operators. Hence, a natural idea for handling the above issue is to examine the reservoir operation rule with actual data and planning data [8].
Implicit stochastic optimization (ISO) is a tool developed to achieve this goal. The key idea behind the ISO method is to derive a near-optimal reservoir operation rule from long-term historical data [9]. Since its origin, ISO has attracted intensive attention from researchers all over the world, and many effective methods have been developed to enhance its practicality [10]. Thus far, the existing methods can be roughly divided into two groups [11]: the first includes traditional techniques, such as the scheduling graph method (SGM) and multiple linear regression (MLR); the other comprises artificial intelligence (AI) approaches represented by the artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). The former involves classical methods, but they often fail to consider the latest operation data and to capture the complex nonlinearity between the dependent variable and the independent variables [12], while the latter can not only effectively alleviate these defects, but also scientifically analyze large-scale datasets [13]. Over the past few decades, extensive applications of AI-based methods have been published, because they can produce accurate results for a variety of engineering problems.
ANN is inspired by the working mechanism of the human brain and nervous system and has been widely applied to solve a variety of practical engineering problems. ANN can be treated as a special signal processing system composed of numerous interconnected layers linked by weight vectors between neighboring layers. For instance, the authors of [14] used a particle swarm optimization model to train the parameters of an ANN for stage prediction of the Shing Mun River; the authors of [15] verified the feasibility of support vector regression and ANN in river stage prediction; the authors of [16] developed a hybrid ANN method based on quantum-behaved particle swarm optimization for daily runoff forecasting; the authors of [17] used ANN to forecast the ice conditions of the Yellow River in the Inner Mongolia reach; the authors of [18] compared the performances of several AI-based methods (such as ANN and SVM) in monthly discharge prediction; the authors of [19] made full use of ANN to forecast concurrent flows in a river system; and, based on ANN and SVM, the authors of [20] developed a hybrid forecasting method to effectively improve the forecast accuracy of monthly streamflow. Therefore, the above literature indicates that ANN can provide reasonable results for water resources problems.
ELM is a novel training method for single-hidden-layer feed-forward neural networks. After randomly determining the input-hidden weights and hidden biases, ELM can directly obtain the hidden-output weights by calculating the Moore–Penrose generalized inverse of the hidden output matrix. ELM has better generalization ability and a faster learning rate than gradient-based methods, promoting its widespread application in practice. For instance, the authors of [21] used wavelet neural networks and ELM to forecast monthly discharge; the authors of [22] used ELM and quantum-behaved particle swarm optimization to predict daily runoff; the authors of [23] proposed a robust ELM method and then verified its feasibility in indoor positioning; the authors of [24] developed a weighted ELM for imbalance learning; the authors of [25] combined baseflow separation, binary-coded swarm optimization, and ELM for neural network river forecasting; the authors of [26] used binary-coded particle swarm optimization and ELM to develop a data-driven input variable selection method for rainfall-runoff modeling; and the authors of [27] developed a hybrid ELM model for multi-step short-term wind speed forecasting. Thus, existing simulations have fully demonstrated that ELM is a promising tool for addressing complicated regression and classification problems.
SVM is a supervised machine learning method based on the Vapnik–Chervonenkis dimension theory and the structural risk minimization principle. It has been proven in theory that SVM is able to guarantee global optimization for regression or classification problems. Recently, growing attention has been paid to the SVM method because it can produce satisfactory results in many engineering problems. For instance, the authors of [28] verified the predictability of monthly streamflow using SVM coupled with discrete wavelet transform and empirical mode decomposition; the authors of [29] used support vector machines for long-term discharge prediction; the authors of [30] developed a multi-objective ecological reservoir operation model based on an improved SVM model in which meteorological and hydrological data are used as the input information; the authors of [31] proposed an artificial bee colony algorithm-optimized SVM for system reliability analysis of slopes; and the authors of [32] used a modified SVM model based on ensemble empirical mode decomposition to forecast annual rainfall-runoff. Thus, various reports have fully proven the feasibility of SVM in solving engineering problems.
Although a variety of reports on reservoir operation rule derivation have been published over the past few decades, there are few publications evaluating the performances of different methods in deriving the operation rule of a hydropower reservoir. Hence, in order to fill this gap, the primary goal of this paper is to compare the performances of several well-known methods in deriving the reservoir operation rule, including the conventional scheduling graph method (SGM), MLR, ANN, ELM, and SVM. The Hongjiadu reservoir located in southwest China is chosen as the study area, and the effectiveness of the five methods is compared with different indexes. The simulations show that the three artificial intelligence methods (ANN, ELM, and SVM) are promising tools for deriving the reservoir operation rule when compared with SGM and MLR.
The rest of this paper is organized as follows. The deterministic hydropower reservoir operation model is given in Section 2. Section 3 briefly presents the theories of the methods adopted in this study. The quantitative indexes, experimental results, and discussion are presented in Section 4, and the conclusions are given in Section 5.

2. Deterministic Hydropower Reservoir Operation to Produce Dataset

2.1. Objective Function

The scheduling process obtained from the deterministic optimization model is used to evaluate the performance of the derived reservoir operation rule. Considering that power generation is an important indicator for comparing the management levels of different hydropower enterprises in a market environment, the objective function is often chosen to maximize the multi-year average electric energy production of the target hydropower reservoir [33], which can be expressed as follows:
$$\max E = \sum_{i=1}^{N} \sum_{j=1}^{M} \left[ P_{i,j}\, t_{i,j} - g(P_{i,j}) \right]$$
where $E$ is the value of the objective function; $N$ is the number of years; $M$ is the number of periods per year (months here, i.e., $M = 12$); $P_{i,j}$ is the reservoir's power output at the $j$th period of the $i$th year; $t_{i,j}$ is the total number of hours in the $j$th period of the $i$th year; and $g(P_{i,j})$ denotes the penalty function, which can be described as below:
$$g(P_{i,j}) = \begin{cases} a \left[ P_{i,j} - P_{i,j}^{\min} \right]^{b} & \text{if } P_{i,j} < P_{i,j}^{\min} \\ 0 & \text{otherwise} \end{cases}$$
where $P_{i,j}^{\min}$ is the preset minimum power output, and $a$ and $b$ are two positive coefficients.
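As a minimal illustration of how this penalized objective might be evaluated in code, the sketch below assumes hypothetical arrays of simulated power output and period hours, and illustrative values for the coefficients a and b; none of these values are taken from the paper.

```python
import numpy as np

def penalty(power, p_min, a=1.0, b=2.0):
    """Penalty g(P) for periods whose output falls below the preset minimum.
    a and b are illustrative positive coefficients, not values from the paper."""
    shortfall = np.where(power < p_min, np.abs(power - p_min), 0.0)
    return a * shortfall ** b

def objective(power, hours, p_min):
    """Total energy over all years and periods minus the penalty term.
    power, hours: arrays of shape (N_years, 12) in MW and hours."""
    return float(np.sum(power * hours - penalty(power, p_min)))
```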

2.2. Operation Constraints

To ensure that all the variables vary in the feasible zones, the following equality or inequality constraints are considered in the modeling process [34,35,36], including the water balance equation, storage volume limits, water spillage limits, turbine discharge, and power output limits.
$$V_{i,j} = V_{i,j-1} + \left[ I_{i,j} - (q_{i,j} + s_{i,j}) \right] \cdot t_{i,j}, \quad i \in [1, N],\ j \in [1, M]$$
$$V_{i,j}^{\min} \le V_{i,j} \le V_{i,j}^{\max}, \quad i \in [1, N],\ j \in [1, M]$$
$$q_{i,j}^{\min} \le q_{i,j} \le q_{i,j}^{\max}, \quad i \in [1, N],\ j \in [1, M]$$
$$s_{i,j}^{\min} \le s_{i,j} \le s_{i,j}^{\max}, \quad i \in [1, N],\ j \in [1, M]$$
$$P_{i,j}^{\min} \le P_{i,j} \le P_{i,j}^{\max}, \quad i \in [1, N],\ j \in [1, M]$$
where $V_{i,j}$, $I_{i,j}$, $q_{i,j}$, and $s_{i,j}$ are the storage volume, local inflow, turbine discharge, and abandoned spillage at the $j$th period of the $i$th year, respectively. $V_{i,j}^{\max}$ and $V_{i,j}^{\min}$ are the maximum and minimum storage volumes at the $j$th period of the $i$th year, respectively. $q_{i,j}^{\max}$ and $q_{i,j}^{\min}$ are the maximum and minimum turbine discharges at the $j$th period of the $i$th year, respectively. $s_{i,j}^{\max}$ and $s_{i,j}^{\min}$ are the maximum and minimum water spillages at the $j$th period of the $i$th year, respectively. $P_{i,j}^{\max}$ and $P_{i,j}^{\min}$ are the maximum and minimum power outputs at the $j$th period of the $i$th year, respectively.

2.3. Optimization Methods

When the long-term inflow series, initial storage, and terminal storage are known, the above optimization model becomes a deterministic operation problem that can be easily solved by the well-known dynamic programming method [37,38]. The corresponding dynamic programming recursive equation is given as below:
$$E_{i,j}(V_{i,j}) = \max \left\{ e_{i,j}(V_{i,j}, V_{i,j-1}) + E_{i,j-1}(V_{i,j-1}) \right\}$$
where $e_{i,j}(V_{i,j}, V_{i,j-1})$ is the objective function value at the $j$th period of the $i$th year and $E_{i,j}(V_{i,j})$ denotes the optimal cumulative return from the $j$th period of the $i$th year back to the first period.
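To make the recursion concrete, the sketch below runs it forward over a discretized storage grid. It assumes a user-supplied stage-return function stage_return(i, j, v_prev, v_curr) that encapsulates the water balance, power calculation, and penalty, since those details are problem-specific and are not given as code in the paper.

```python
import numpy as np

def dp_operation(V_grid, n_years, n_periods, stage_return):
    """Deterministic DP over a discretized storage grid V_grid (1-D array).

    stage_return(i, j, v_prev, v_curr) must return the period benefit e_{i,j}
    (or -inf for an infeasible transition); it is assumed to be supplied by the user.
    Returns the cumulative-return table E and the best-predecessor policy."""
    n_stages = n_years * n_periods
    n_states = len(V_grid)
    E = np.full((n_stages + 1, n_states), -np.inf)
    E[0, :] = 0.0                                   # no accumulated return before period 1
    policy = np.zeros((n_stages, n_states), dtype=int)

    for s in range(n_stages):
        i, j = divmod(s, n_periods)                 # year index, period index
        for k, v_curr in enumerate(V_grid):
            # E_{i,j}(V_{i,j}) = max over V_{i,j-1} of e_{i,j} + E_{i,j-1}
            returns = [stage_return(i, j, v_prev, v_curr) + E[s, m]
                       for m, v_prev in enumerate(V_grid)]
            policy[s, k] = int(np.argmax(returns))
            E[s + 1, k] = returns[policy[s, k]]
    return E, policy
```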

3. Brief Introductions of the Adopted Methods

Brief information on the four adopted methods is given in this section, including multiple linear regression (MLR), artificial neural network (ANN), extreme learning machine (ELM), and support vector machine (SVM). Because of its simple principle and easy implementation, MLR is seen as one of the most classical methods in the reservoir operation rule field; with strong generalization and self-learning abilities, ANN is chosen to derive the reservoir operation rule; with a faster training rate and better regression ability, ELM is also used for reservoir operation rule derivation; and because of its merits of fewer computation parameters and theoretical completeness, SVM is also an alternative tool for deriving the reservoir operation rule. Besides, numerous mature software packages have been developed to implement these methods, which can markedly reduce the workload and improve execution efficiency.

3.1. Multiple Linear Regression (MLR)

Multiple linear regression (MLR) is a classical statistical tool developed to formulate complex input–output relationships [39]. The key goal of MLR is to find an approximate linear function between a set of independent variables and the dependent variable. Without loss of generality, the regression line in MLR can be expressed as follows:
$$y = \beta_0 + \beta_1 x_1 + \cdots + \beta_i x_i + \cdots + \beta_k x_k + \varepsilon$$
where $y$ is the dependent variable, $x_i$ is the $i$th independent variable, $\beta_i$ is the regression coefficient of $x_i$, $k$ is the number of independent variables, and $\varepsilon$ is the residual error term.
Then, the above equation for a set of samples can be rewritten in a compact matrix form, which can be described as below:
$$Y = X\beta + \varepsilon$$
where
$$Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}, \quad \beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} \quad \text{and} \quad X = \begin{bmatrix} 1 & x_{1,1} & \cdots & x_{1,k} \\ 1 & x_{2,1} & \cdots & x_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \cdots & x_{n,k} \end{bmatrix}_{n \times (k+1)}$$
where $n$ is the number of samples, $x_{m,i}$ is the value of the $i$th independent variable in the $m$th sample, and $\varepsilon_m$ is the residual error of the $m$th sample.
Based on classical matrix operation theory, the standard least-squares method can be used to calculate the coefficient vector $\beta$ of the MLR model, which is described as below:
$$\beta = \left( X^{T} X \right)^{-1} X^{T} Y$$
In such a way, the coefficient vector $\beta$ is known, and the obtained MLR model can be adopted to predict the dependent variable associated with a new input vector.
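A minimal numpy sketch of this estimator is given below under the assumption of a generic design matrix; np.linalg.lstsq is used instead of the explicit matrix inverse for numerical stability.

```python
import numpy as np

def fit_mlr(X_raw, y):
    """Ordinary least-squares estimate of the MLR coefficients.
    X_raw: (n, k) matrix of independent variables; y: (n,) dependent variable."""
    n = X_raw.shape[0]
    X = np.hstack([np.ones((n, 1)), X_raw])          # prepend the intercept column
    # beta = (X^T X)^{-1} X^T Y, solved in a numerically stable way
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def predict_mlr(beta, X_raw):
    """Apply the fitted regression line to new samples."""
    n = X_raw.shape[0]
    X = np.hstack([np.ones((n, 1)), X_raw])
    return X @ beta
```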

3.2. Artificial Neural Network (ANN)

ANNs have been widely used to alleviate the shortcomings of conventional algorithms when dealing with complex problems. Without knowing an accurate mathematical description of the underlying process to be addressed, ANNs can learn hidden knowledge from the assigned data samples by establishing an input–output mapping for simulation. To date, many different types of ANN variants have been reported in the literature [40]. Here, the feed-forward network trained by the back-propagation method is chosen for this paper. The sketch map of the feed-forward ANN model is drawn in Figure 1. In a feed-forward ANN, there are usually three kinds of layers composed of multiple interconnected neurons: the input layer receiving the external signal, the hidden layer or layers processing the data in an orderly way, and the output layer exporting the predictive result.
Two key procedures are involved in training the feed-forward ANN. The first is the feed-forward procedure, in which the information is delivered from the input layer to the output layer via all the hidden layers; the other is the backward procedure, in which the derivatives of the objective function with respect to the weights are propagated back through all the nodes of the network, meaning that the weights and biases of all the nodes are dynamically adjusted based on the error between the simulated values of the network and the target outputs. For any node in a layer, the transfer function is applied to the accumulated result obtained by calculating the inner product of the input vector and the weight vector, which can be expressed as in Equation (13). Then, the result is delivered to the next layer. In addition, the neurons in one layer are usually linked to all the neurons in the next layer, whereas connections between any two neurons in the same layer do not exist.
$$y = f\left( \mathbf{w} \cdot \mathbf{x} + b \right)$$
where $y$ is the output of the node; $f$ is the transfer function of the node; $b$ is the bias value of the node; and $\mathbf{w}$ and $\mathbf{x}$ denote the weight vector and input vector of the node, respectively.
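A minimal numpy sketch of this forward pass for a single-hidden-layer network of the kind described above (sigmoid hidden nodes, linear output node) is shown below; the dimensions and the randomly initialized weights are purely illustrative stand-ins for the back-propagation-trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer feed-forward network.
    Each node computes f(w . x + b), as in Equation (13)."""
    hidden = sigmoid(W1 @ x + b1)      # sigmoid transfer in the hidden layer
    return W2 @ hidden + b2            # linear transfer in the output layer

# illustrative dimensions: 2 inputs (e.g., water level, inflow), 7 hidden nodes, 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((7, 2)), rng.standard_normal(7)
W2, b2 = rng.standard_normal((1, 7)), rng.standard_normal(1)
print(ann_forward(np.array([0.5, 0.3]), W1, b1, W2, b2))
```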

3.3. Extreme Learning Machine (ELM)

Extreme learning machine (ELM) is an emerging technique developed to train single-hidden-layer feed-forward neural networks (SLFNs) [41]. In ELM, after randomly generating the input weights and hidden biases in a preset range, the hidden-output weights can be obtained via the matrix multiplication of the generalized inverse of the hidden output matrix and the target output matrix. For a set of training samples $\{(x_t, y_t) \mid x_t \in \mathbb{R}^n, y_t \in \mathbb{R}^m, t = 1, 2, \ldots, N\}$, the outputs of the ELM model can be expressed as below:
$$f_t = \sum_{i=1}^{L} \beta_i\, g(\alpha_i \cdot x_t + b_i) = O_t, \quad t = 1, 2, \ldots, N$$
where $\alpha_i \in \mathbb{R}^n$ is the weight vector linking the input layer and the $i$th hidden node, $\beta_i \in \mathbb{R}^m$ is the weight vector linking the $i$th hidden node and the output layer, $b_i \in \mathbb{R}$ is the bias value of the $i$th hidden node, $g(\cdot)$ is the nonlinear activation function of the hidden nodes, $L$ is the number of neurons in the hidden layer, and $O_t \in \mathbb{R}^m$ is the simulated output vector of the neural network.
Then, the above equation can be rewritten as follows:
$$H\beta = T$$
where
$$H = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_N) \end{bmatrix} = \begin{bmatrix} g(\alpha_1 \cdot x_1 + b_1) & \cdots & g(\alpha_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\alpha_1 \cdot x_N + b_1) & \cdots & g(\alpha_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}$$
$$\beta = \begin{bmatrix} \beta_1^{T} \\ \vdots \\ \beta_L^{T} \end{bmatrix}_{L \times m} \quad \text{and} \quad T = \begin{bmatrix} y_1^{T} \\ \vdots \\ y_N^{T} \end{bmatrix}_{N \times m}$$
where H denotes the output matrix of the hidden layer.
The optimization objective of ELM is to find appropriate parameters such that $\sum_{t=1}^{N} \left\| O_t - y_t \right\| = 0$ holds. Then, the coefficient matrix $\beta$ can be obtained by analytically determining the least-squares solution of the above linear system, $\min_{\beta} \left\| H\beta - T \right\|$, and the special solution can be expressed as follows:
$$\hat{\beta} = H^{\dagger} T$$
where $H^{\dagger}$ denotes the Moore–Penrose generalized inverse of the hidden layer output matrix.
Then, the learning procedures for the ELM method are summarized as below:
Step 1: Define the number of hidden neurons and the activation function of each neuron.
Step 2: Randomly produce the input-hidden weights as well as the hidden biases.
Step 3: Use all the data samples to obtain the output matrix of the hidden layer.
Step 4: Choose a suitable method to calculate the hidden-output weights.
Step 5: Use the optimized ELM network to produce the simulated outputs for new samples.
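A compact numpy sketch of these five steps is given below; the uniform initialization range and the sigmoid activation are assumptions consistent with the model configuration described later, and the pseudoinverse of Step 4 is computed with np.linalg.pinv.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, T, L, seed=0):
    """ELM training following Steps 1-4: random input weights/biases, then the
    hidden-output weights from the Moore-Penrose pseudoinverse of H."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(n_features, L))   # input-hidden weights (Step 2)
    b = rng.uniform(-1.0, 1.0, size=L)                  # hidden biases (Step 2)
    H = sigmoid(X @ A + b)                              # hidden output matrix (Step 3)
    beta = np.linalg.pinv(H) @ T                        # hidden-output weights (Step 4)
    return A, b, beta

def predict_elm(X, A, b, beta):
    """Step 5: simulated outputs for new samples."""
    return sigmoid(X @ A + b) @ beta
```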

3.4. Support Vector Machine (SVM)

As a well-known technique based on statistical learning theory, the support vector machine (SVM) makes full use of the structural risk minimization principle, rather than the classical empirical risk minimization used in conventional methods, to guarantee the generalization capability of the regression model [42]. Figure 2 shows the sketch map of the SVM model. Supposing that the $i$th sample has a $D$-dimensional input vector $x_i \in \mathbb{R}^D$ and a scalar output $y_i \in \mathbb{R}$, the following regression function can be employed to express the nonlinear input–output relationship in the SVM model:
$$f(x_i) = w^{T} \varphi(x_i) + b, \quad i = 1, 2, \ldots, l$$
where $f(x_i)$ denotes the predicted value of the SVM model, $\varphi(x_i)$ is the nonlinear mapping function, and $w$ and $b$ are the parameters of the SVM model to be optimized.
For a training dataset with $l$ samples, the $\nu$-SVM optimization model can be expressed as follows:
$$\begin{aligned} \min \quad & R(w, \xi, \xi^{*}, \varepsilon) = \frac{1}{2} \left\| w \right\|^{2} + C \left[ \nu \varepsilon + \frac{1}{l} \sum_{i=1}^{l} \left( \xi_i + \xi_i^{*} \right) \right] \\ \text{subject to:} \quad & y_i - w^{T} \varphi(x_i) - b \le \varepsilon + \xi_i \\ & w^{T} \varphi(x_i) + b - y_i \le \varepsilon + \xi_i^{*} \\ & \xi_i, \xi_i^{*} \ge 0, \quad \varepsilon \ge 0 \end{aligned}$$
where $C$ is the parameter used to balance the empirical risk and the model complexity term $\left\| w \right\|^{2}$, and $\xi_i$ and $\xi_i^{*}$ are slack variables denoting the distances of the $i$th sample outside the $\varepsilon$-tube.
As a standard nonlinear constrained optimization problem, the above problem can be solved by constructing the dual optimization problem based on the Lagrange multiplier technique:
$$\begin{aligned} \max \quad & R(a_i, a_i^{*}) = \sum_{i=1}^{l} y_i \left( a_i^{*} - a_i \right) - \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \left( a_i - a_i^{*} \right) \left( a_j - a_j^{*} \right) K(x_i, x_j) \\ \text{subject to:} \quad & \sum_{i=1}^{l} \left( a_i - a_i^{*} \right) = 0 \\ & 0 \le a_i, a_i^{*} \le C / l \\ & \sum_{i=1}^{l} \left( a_i + a_i^{*} \right) \le C \cdot \nu \end{aligned}$$
where $K(x_i, x_j)$ is the kernel function satisfying Mercer's condition, and $a_i$ and $a_i^{*}$ are the nonnegative Lagrange multipliers.
After obtaining the best solution for the dual optimization problem, the parameters of the SVM model are known and the regression form for an unknown input vector x is expressed as follows:
$$f(x) = \sum_{i=1}^{l} \left( a_i^{*} - a_i \right) K(x_i, x) + b$$
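For a concrete, hedged example, scikit-learn's NuSVR class implements this $\nu$-SVM regression formulation; the sketch below uses random placeholder data and placeholder hyperparameter values rather than anything reported in the paper.

```python
import numpy as np
from sklearn.svm import NuSVR

# placeholder training data: the two columns could stand for initial water level and inflow
rng = np.random.default_rng(0)
X_train = rng.random((100, 2))
y_train = rng.random(100)

# nu-SVR with an RBF kernel; C, nu, and gamma here are placeholder values
model = NuSVR(kernel="rbf", C=10.0, nu=0.5, gamma=0.5)
model.fit(X_train, y_train)
y_pred = model.predict(rng.random((5, 2)))
```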

4. Experimental Results

4.1. Study Area and Dataset

Here, the Hongjiadu reservoir, located on the mainstream of the Wu River in southwest China, is chosen as the study site. The reservoir has a total drainage area of 9900 km2 and an average annual runoff of 4.89 billion m3. The dead water level is 1076 m and the dead storage is 1.14 billion m3; the normal water level is 1140 m and the corresponding storage volume is 4.5 billion m3. In Hongjiadu, the flood control level is 1138 m from 1 June to 1 September, while its regulation storage is about 3.4 billion m3. Obviously, the active-storage volume of the Hongjiadu reservoir is rather large in comparison with its annual inflow volume, meaning that it plays a large role in determining the efficiencies achievable by any operation rule. Besides, the Hongjiadu reservoir has three mixed-flow turbine generating units of 200 MW each, for a total installed capacity of 600 MW. Under normal circumstances, almost all of the flow at Hongjiadu passes through the hydropower turbines. As a leading carry-over storage reservoir on the trunk stream of the Wu River, the Hongjiadu reservoir has provided comprehensive benefits promoting the healthy and orderly development of Guizhou Province since being put into operation, such as power generation, ecological protection, water supply, flood control, and environmental governance. In practice, various scheduling purposes can be well addressed in the derived operating rule by setting the necessary constraints on some variables, such as water levels, power outputs, or discharge rates [43,44,45,46].
The actual monthly streamflow data from January 1952 to December 2015 are collected from the watershed management organization of the Wu River. Then, dynamic programming is employed to calculate the deterministic optimization results for the Hongjiadu reservoir, with the minimum power output set to 150 MW. The optimal results (water level, inflow, and outflow) are drawn in Figure 3. For the optimized scheduling results, the first 50 years of data are used to train the models, while those of the last 13 years are employed for testing. In addition, for the artificial intelligence algorithms (ANN, ELM, and SVM), numerical problems are often unavoidable if attributes with smaller values are dominated by those with larger values. In order to avoid such numerical difficulties in the modeling process, the normalization in Equation (23) is adopted to scale all the attribute values to the range between 0 and 1. All the results are obtained on a desktop computer with the Windows 7 operating system, an Intel Core i7-3770 processor, and 4 GB of random access memory (RAM).
$$\tilde{x}_i = \frac{x_i - \min\limits_{1 \le i \le n} \{x_i\}}{\max\limits_{1 \le i \le n} \{x_i\} - \min\limits_{1 \le i \le n} \{x_i\}}$$
where $x_i$ and $\tilde{x}_i$ denote the original and normalized values of the target factor, respectively.
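A two-line numpy sketch of this min-max scaling (and its inverse, which is useful for reading decisions back in physical units) is shown below for reference.

```python
import numpy as np

def min_max_scale(x):
    """Scale an attribute to [0, 1] as in Equation (23)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def min_max_inverse(x_scaled, x_min, x_max):
    """Map a scaled value back to its original range."""
    return x_scaled * (x_max - x_min) + x_min
```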
It should be mentioned that actual data, rather than runoff predictions, are used by reservoir operators in deriving the candidate operation rules, in which the actual monthly inflow is an important input component and the monthly outflow is the key decision variable. When the obtained operation rule is used for production guidance, the future monthly runoff, estimated from the real-time inflow rates available on a daily basis for the past months, is used to determine the flows through the turbines and the abandoned flows through the spillway.

4.2. Performance Criterion

Here, two quantitative indicators are used to test the feasibility of the different methods: average power generation (APG) and generation guarantee rate (GGR). APG reflects the simulated generation benefit of the target method in the long run, while GGR measures the proportion of periods in which the simulated power output is no less than the preset minimum. Generally, the method with larger values of the two indexes has better performance. The definitions of the two indexes are given as below:
$$APG = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} \tilde{P}_{i,j}\, t_{i,j}$$
$$GGR = \frac{1}{N \times M} \sum_{i=1}^{N} \sum_{j=1}^{M} c_{i,j}, \quad c_{i,j} = \begin{cases} 1 & \text{if } \tilde{P}_{i,j} \ge P_{i,j}^{\min} \\ 0 & \text{otherwise} \end{cases}$$
where $\tilde{P}_{i,j}$ is the simulated power output of the target method at the $j$th period of the $i$th year, and $c_{i,j}$ is an intermediate variable.
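Both indexes are straightforward to compute; the sketch below assumes arrays of simulated power output and period hours arranged by year and month.

```python
import numpy as np

def apg(P_sim, hours):
    """Average power generation: mean annual energy over the N simulated years.
    P_sim, hours: arrays of shape (N_years, 12)."""
    return float(np.sum(P_sim * hours) / P_sim.shape[0])

def ggr(P_sim, p_min):
    """Generation guarantee rate: share of periods whose output meets the minimum."""
    return float(np.mean(P_sim >= p_min))
```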

4.3. Model Development

4.3.1. MLR Model Development

Because of its simplicity and easy implementation, the linear operation rule is used for the purpose of comparison. The total discharge is chosen as the dependent variable, while the initial water level and inflow in each period are chosen as the two related independent variables. The linear operation rule of the Hongjiadu reservoir for each month is expressed in Equation (26). Then, the parameters involved in the linear operation rule are obtained by the MLR method described in Section 3.1. Table 1 shows the obtained coefficients of the linear operation rule for each month. It can be observed that the three coefficients differ considerably from month to month, demonstrating the complexity of reservoir operation.
$$O_t = a + b \times Z_{t-1} + c \times I_t, \quad t = 1, 2, \ldots, 12$$
where $O_t$ is the total discharge in the $t$th month; $I_t$ is the total inflow in the $t$th month; $Z_{t-1}$ is the initial water level of the $t$th month; and $a$, $b$, and $c$ are three different parameters.
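A sketch of how Equation (26) might be fitted month by month with ordinary least squares is given below; the arrays of optimized water levels, inflows, and outflows grouped by calendar month are assumed inputs, not data from the paper.

```python
import numpy as np

def fit_monthly_rules(Z0, I, O, months):
    """Fit O_t = a + b*Z_{t-1} + c*I_t separately for each calendar month.
    Z0, I, O: 1-D arrays of initial water level, inflow, and total discharge;
    months: 1-D array of month indices (1..12) for each sample."""
    rules = {}
    for m in range(1, 13):
        mask = months == m
        X = np.column_stack([np.ones(mask.sum()), Z0[mask], I[mask]])
        coef, *_ = np.linalg.lstsq(X, O[mask], rcond=None)
        rules[m] = coef                      # (a, b, c) for month m
    return rules

def apply_rule(rules, month, z0, inflow):
    """Evaluate the fitted linear rule for one period."""
    a, b, c = rules[month]
    return a + b * z0 + c * inflow
```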

4.3.2. ANN Model Development

Here, a three-layer ANN model trained by the back-propagation method is used to derive the operation rule of the Hongjiadu reservoir. All the hidden nodes use the sigmoid activation function, while the linear function is used in the output layer. Given that the number of nodes in the hidden layer has an important effect on the performance of the ANN model, a trial-and-error strategy is used to choose the best network structure. The training process is terminated when the root-mean-square error (RMSE) of all the testing samples reaches its minimum. Figure 4 shows the performance on the testing dataset as the number of hidden nodes changes from 3 to 18. It can be found that the model performance is affected by the number of hidden neurons, and the best performance on the testing dataset is achieved with seven nodes in the hidden layer. Thus, the number of hidden nodes is set as seven for the Hongjiadu reservoir.
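The trial-and-error search can be sketched as below, here with scikit-learn's MLPRegressor standing in for the back-propagation ANN; the training and testing arrays are assumed to be prepared beforehand.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def select_hidden_nodes(X_train, y_train, X_test, y_test, candidates=range(3, 19)):
    """Pick the hidden-layer size that minimizes the testing RMSE."""
    best_n, best_rmse = None, np.inf
    for n in candidates:
        net = MLPRegressor(hidden_layer_sizes=(n,), activation="logistic",
                           max_iter=5000, random_state=0)
        net.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, net.predict(X_test)) ** 0.5
        if rmse < best_rmse:
            best_n, best_rmse = n, rmse
    return best_n, best_rmse
```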

4.3.3. ELM Model Development

Similar to the above ANN model, the sigmoid and linear activation functions are adopted in the hidden layer and output layer of the ELM model, respectively. The number of hidden nodes is set to twice the number of input nodes, while quantum-behaved particle swarm optimization (QPSO) [16,22] is employed to search for the appropriate network parameters (the input-hidden weights and hidden biases). The number of individuals and iterations in QPSO is set as 100, and the RMSE value is chosen as the indicator for comparing the model parameters. Figure 5 illustrates the simulation results of the ELM model for the Hongjiadu reservoir in 10 runs. It can be found that the ELM model in the fourth run has the best performance in both generation guarantee rate and average power generation. Thus, the corresponding model is chosen to derive the operation rule of the Hongjiadu reservoir.
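The QPSO search itself is not reproduced here; as a simplified stand-in, the sketch below keeps the best of several randomly initialized ELM networks according to the testing RMSE, which mimics the multi-run comparison of Figure 5 without claiming to replicate the QPSO procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def best_elm_of_runs(X_tr, T_tr, X_te, T_te, L, n_runs=10):
    """Simplified stand-in for the QPSO search: keep the random initialization
    whose ELM gives the smallest testing RMSE."""
    best, best_rmse = None, np.inf
    for seed in range(n_runs):
        rng = np.random.default_rng(seed)
        A = rng.uniform(-1.0, 1.0, size=(X_tr.shape[1], L))   # input-hidden weights
        b = rng.uniform(-1.0, 1.0, size=L)                     # hidden biases
        beta = np.linalg.pinv(sigmoid(X_tr @ A + b)) @ T_tr    # hidden-output weights
        pred = sigmoid(X_te @ A + b) @ beta
        rmse = float(np.sqrt(np.mean((pred - T_te) ** 2)))
        if rmse < best_rmse:
            best, best_rmse = (A, b, beta), rmse
    return best, best_rmse
```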

4.3.4. SVM Model Development

In general, the kernel function plays an important role in enhancing SVM performance. According to previous publications, the radial basis function is one of the most commonly used kernel functions because it has better generalization ability than other kernel functions. Hence, the radial basis function (RBF) in Equation (27) is chosen as the kernel function. There are then three parameters $(C, \gamma, \varepsilon)$ to be tuned in the SVM model with the RBF kernel. In order to obtain a satisfactory performance, the above-mentioned QPSO method is used to optimize these parameters. Based on the simulation results, the optimal parameter combination of the SVM model is set as (10.768, 0.456, 0.784) for operation rule derivation in the Hongjiadu reservoir.
$$K(x_i, x_j) = \exp \left( -\gamma \left\| x_i - x_j \right\|^{2} \right)$$
where γ is the kernel parameter to be optimized.
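For reference, the RBF kernel of Equation (27) is a one-liner in numpy:

```python
import numpy as np

def rbf_kernel(x_i, x_j, gamma):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    diff = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))
```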

4.4. Comparison and Discussion

For the purpose of comparison, the traditional scheduling graph method (SGM) was chosen as the benchmark. Table 2 and Figure 6 show the detailed results of the different approaches for the Hongjiadu reservoir. It can be clearly observed that the dynamic programming method obtains the best scheduling results in the deterministic case; the four simulation-based methods (MLR, ANN, ELM, and SVM) provide suboptimal results compared with dynamic programming, but outperform SGM with respect to the two statistical measures. On the other hand, compared with SGM, MLR, ANN, and SVM, the ELM method generates the best solution with approximately 9.00%, 7.57%, 3.03%, and 1.73% improvements in APG, respectively, while the generation guarantee rate is improved by about 8.01%, 4.80%, 1.87%, and 0.27%, respectively. Hence, it can be concluded that the three AI-based methods provide better results than the traditional SGM and MLR methods, and the operation rule derived by ELM has the best performance in the long-term simulations.
Figure 7 shows the average power output obtained by the different methods for the Hongjiadu reservoir, while Figure 8 shows the water level of the different methods for the testing samples. It can be found that the dynamic programming (DP) method provides the most power generation in the wet season but the least in the dry season. This indicates that, in the ideal scheduling process, the Hongjiadu reservoir can reduce power generation in the dry season and then use the abundant runoff to keep the reservoir operating at a high level, enhancing the efficiency of the hydroelectric generators in the long run. Besides, the SGM method tends to smooth the power output in the long run because it fails to raise the water level in the wet season, while the ELM method has a stronger capability than MLR, ANN, and SVM in mimicking the optimal scheduling process. Thus, the feasibility of the solutions obtained by the different methods is demonstrated in this case.
Figure 9 shows the graphic models (outflow–inflow–water level) of the four algorithms for the Hongjiadu reservoir in August. The following conclusions can be deduced for the four methods: when the water level is fixed, there is a positive relationship between power output and inflow; when the inflow is fixed, the reservoir tends to increase the power output as the water level rises. On the other hand, there are obvious differences among the graphic models, while the gap among the average annual power generation of the four methods is relatively small, demonstrating that different combinations of parameters in the hydropower reservoir operation rule can be nearly equivalent. Thus, operators should take the actual working conditions of the hydropower reservoir into consideration when making the scheduling plan for production guidance.
From the above analysis, it can be clearly observed that dynamic programming has the best performance, while the three artificial intelligence algorithms (ANN, ELM, and SVM) provide better simulation results than SGM and MLR. The dynamic programming method divides the complicated multistage reservoir operation optimization problem into a series of relatively simple subproblems to be solved sequentially, and then seeks the global optimal solution in the discrete state space, providing the best scheduling results for simulation [47,48,49]. In general, for the reservoir operation rule of each month, there is often a strong nonlinearity between the independent variables (such as water level and inflow) and the dependent variable (such as outflow). The conventional MLR method based on the simulation and optimization strategy can only handle linear relationships, rather than the inherent nonlinear relationship of this problem, leading to a lower generation benefit for the hydropower reservoir. The SGM approach, based on historical data and engineering experience, cannot adequately consider the dynamic variation of reservoir runoff caused by climate change and human activities, reducing the overall operational efficiency of the Hongjiadu reservoir. The three artificial intelligence algorithms make the most of mapping functions to project the training samples into a high-dimensional feature space, and then carefully choose appropriate optimization strategies to find the solution that minimizes the total training error. As a result, the three artificial intelligence algorithms have some unique merits in comparison with the SGM and MLR methods, including self-learning ability (training network parameters to simulate the complex nonlinear input–output relationship), generalization ability (performing satisfactorily on new data samples), and fault-tolerant ability (behaving well for a partially damaged system), producing better performances than the two traditional methods. On the other hand, the ELM method appears capable of obtaining the best performance among all the methods used to derive the reservoir operation rule. The difference in the optimization principle adopted by each method is the key reason why the performance of ELM is superior to both SVM and ANN. Specifically, with a strong generalization ability for a variety of feature mappings, ELM is able to approximate any continuous function by determining the global optimum for the training samples [50], while the QPSO optimizer can effectively enhance the network compactness by carefully choosing the necessary parameters [51]; the traditional gradient-based ANN training method tends to fall into local optima with a relatively long learning time; and the SVM method can only provide a suboptimal solution with a higher computational complexity and more restrictive constraints. In addition, it should be pointed out that the relative performances of the three artificial intelligence methods may differ with changes in the problem characteristics or research objects. To sum up, it can be concluded that, in the field of reservoir operation rule derivation, future research efforts can be directed to those artificial intelligence methods with promising simulation ability.

5. Conclusions

This study investigates the performances of four effective methods in deriving the operation rule of a hydropower reservoir: MLR, ANN, ELM, and SVM. For the purpose of comparison, the conventional SGM approach was chosen as the benchmark. The historical streamflow data of the Hongjiadu reservoir, optimized by the dynamic programming method, are adopted to develop these models. Two indexes are adopted to evaluate the performance of the different methods: average power generation and generation guarantee rate. The results indicate that the three artificial intelligence algorithms (ANN, ELM, and SVM) provide better simulation performances than SGM and MLR. Therefore, the results show that artificial intelligence methods are promising tools for deriving the operation rule of a hydropower reservoir. It should be noted that the performances of ANN, ELM, and SVM vary with the parameter combination, and it is of great importance to develop effective tools for choosing appropriate model parameters.
Besides, the amount of reservoir operation data will increase with the passage of time, which directly affects the major decision (the monthly outflow, which is equivalent to the sum of the abandoned spillage and the turbine discharge) and the core function (such as generation benefit) involved in the operation rules. Thus, the update frequency of the reservoir operating rules can be set to a monthly basis in practice. On the other hand, in many parts of the world, climate change is creating non-stationary conditions in the flow rates of streams. These non-stationary conditions show up as time trends in the annual flow rate of streams and as changes over time in the pattern of month-by-month contributions to the annual streamflow [52,53,54,55], which will have significant impacts on reservoir operation. Owing to limited time and resources, a check for time trends in the inflow data of the study region was not made, but this work is necessary for any future application of the methods presented here for operating-rule selection. Thus, in the future, we will deepen the research on the operation optimization of hydropower reservoirs in a changing environment.

Author Contributions

All authors contributed extensively to the work presented in this paper. Z.-K.F. and W.-J.N. contributed to modeling and finalized the manuscripts. B.-F.F. and Y.-W.M. contributed to data analysis. C.-T.C. and J.-Z.Z. contributed to the literature review.

Funding

This paper is supported by the National Key R&D Program of China (2017YFC0405406), National Natural Science Foundation of China (51709119), Natural Science Foundation of Hubei Province (2018CFB573), and Fundamental Research Funds for the Central Universities (HUST: 2017KFYXJJ193).

Acknowledgments

The writers would like to express appreciation to both editors and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Liao, S.L. Hydropower system operation optimization by discrete differential dynamic programming based on orthogonal experiment design. Energy 2017, 126, 720–732.
2. Ming, B.; Chang, J.X.; Huang, Q.; Wang, Y.M.; Huang, S.Z. Optimal operation of multi-reservoir system based on cuckoo search algorithm. Water Resour. Manag. 2015, 29, 5671–5687.
3. Madani, K. Game theory and water resources. J. Hydrol. 2010, 381, 225–238.
4. Niu, W.J.; Feng, Z.K.; Cheng, C.T.; Wu, X.Y. A parallel multi-objective particle swarm optimization for cascade hydropower reservoir operation in southwest China. Appl. Soft Comput. 2018, 70, 562–575.
5. Madani, K.; Lund, J.R.; Krone, R.B. Innovative modelling for Californian high hydro. Int. Water Power Dam Constr. 2012, 64, 34–36.
6. Zhang, Y.; Jiang, Z.; Ji, C.; Sun, P. Contrastive analysis of three parallel modes in multi-dimensional dynamic programming and its application in cascade reservoirs operation. J. Hydrol. 2015, 529, 22–34.
7. Li, X.; Wei, J.; Li, T.; Wang, G.; Yeh, W.W.G. A parallel dynamic programming algorithm for multi-reservoir system optimization. Adv. Water Resour. 2014, 67, 1–15.
8. Liu, P.; Li, L.; Chen, G.; Rheinheimer, D.E. Parameter uncertainty analysis of reservoir operating rules based on implicit stochastic optimization. J. Hydrol. 2014, 514, 102–113.
9. Liu, P.; Guo, S.; Xu, X.; Chen, J. Derivation of aggregation-based joint operating rule curves for cascade hydropower reservoirs. Water Resour. Manag. 2011, 25, 3177–3200.
10. Ji, C.M.; Zhou, T.; Huang, H.T. Operating rules derivation of Jinsha reservoirs system with parameter calibrated support vector regression. Water Resour. Manag. 2014, 28, 2435–2451.
11. Yang, G.; Guo, S.; Liu, P.; Li, L.; Liu, Z. Multiobjective cascade reservoir operation rules and uncertainty analysis based on PA-DDS algorithm. J. Water Resour. Plan. Manag. 2017, 143, 04017025.
12. Wang, Y.; Guo, S.L.; Yang, G.; Hong, X.J.; Hu, T. Optimal early refill rules for Danjiangkou Reservoir. Water Sci. Eng. 2014, 7, 403–419.
13. Ma, C.; Lian, J.; Wang, J. Short-term optimal operation of Three-Gorge and Gezhouba cascade hydropower stations in non-flood season with operation rules from data mining. Energy Convers. Manag. 2013, 65, 616–627.
14. Chau, K.W. Particle swarm optimization training algorithm for ANNs in stage prediction of Shing Mun River. J. Hydrol. 2006, 329, 363–367.
15. Wu, C.L.; Chau, K.W.; Li, Y.S. River stage prediction based on a distributed support vector regression. J. Hydrol. 2008, 358, 96–111.
16. Cheng, C.; Niu, W.; Feng, Z.; Shen, J.; Chau, K. Daily reservoir runoff forecasting method using artificial neural network based on quantum-behaved particle swarm optimization. Water 2015, 7, 4232–4246.
17. Wang, T.; Yang, K.; Guo, Y. Application of artificial neural networks to forecasting ice conditions of the Yellow River in the Inner Mongolia reach. J. Hydrol. Eng. 2008, 13, 811–816.
18. Wang, W.C.; Chau, K.W.; Cheng, C.T.; Qiu, L. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series. J. Hydrol. 2009, 374, 294–306.
19. Choudhury, P.; Roy, P. Forecasting concurrent flows in a river system using ANNs. J. Hydrol. Eng. 2015, 20, 06014012.
20. Cheng, C.T.; Feng, Z.K.; Niu, W.J.; Liao, S.L. Heuristic methods for reservoir monthly inflow forecasting: A case study of Xinfengjiang Reservoir in Pearl River, China. Water 2015, 7, 4477–4495.
21. Li, B.; Cheng, C. Monthly discharge forecasting using wavelet neural networks with extreme learning machine. Sci. China Technol. Sci. 2014, 57, 2441–2452.
22. Niu, W.; Feng, Z.; Cheng, C.; Zhou, J. Forecasting daily runoff by extreme learning machine based on quantum-behaved particle swarm optimization. J. Hydrol. Eng. 2018, 23, 04018002.
23. Lu, X.; Zou, H.; Zhou, H.; Xie, L.; Huang, G.B. Robust extreme learning machine with its application to indoor positioning. IEEE Trans. Cybern. 2016, 46, 194–205.
24. Zong, W.; Huang, G.B.; Chen, Y. Weighted extreme learning machine for imbalance learning. Neurocomputing 2013, 101, 229–242.
25. Taormina, R.; Chau, K.W.; Sivakumar, B. Neural network river forecasting through baseflow separation and binary-coded swarm optimization. J. Hydrol. 2015, 529, 1788–1797.
26. Taormina, R.; Chau, K.W. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines. J. Hydrol. 2015, 529, 1617–1632.
27. Li, C.; Xiao, Z.; Xia, X.; Zou, W.; Zhang, C. A hybrid model based on synchronous optimisation for multi-step short-term wind speed forecasting. Appl. Energy 2018, 215, 131–144.
28. Zhu, S.; Zhou, J.; Ye, L.; Meng, C. Streamflow estimation by support vector machine coupled with different methods of time series decomposition in the upper reaches of Yangtze River, China. Environ. Earth Sci. 2016, 75, 531.
29. Lin, J.Y.; Cheng, C.T.; Chau, K.W. Using support vector machines for long-term discharge prediction. Hydrol. Sci. J. 2006, 51, 599–612.
30. Yu, Y.; Wang, P.; Wang, C.; Qian, J.; Hou, J. Combined monthly inflow forecasting and multiobjective ecological reservoir operations model: Case study of the Three Gorges Reservoir. J. Water Resour. Plan. Manag. 2017, 143, 05017004.
31. Kang, F.; Li, J. Artificial bee colony algorithm optimized support vector regression for system reliability analysis of slopes. J. Comput. Civ. Eng. 2016, 30, 04015040.
32. Wang, W.C.; Xu, D.M.; Chau, K.W.; Chen, S. Improved annual rainfall-runoff forecasting using PSO-SVM model based on EEMD. J. Hydroinform. 2013, 15, 1377–1390.
33. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Lund, J.R. Optimizing hydropower reservoirs operation via an orthogonal progressive optimality algorithm. J. Water Resour. Plan. Manag. 2018, 144, 4018001.
34. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Peak operation of hydropower system with parallel technique and progressive optimality algorithm. Int. J. Electr. Power Energy Syst. 2018, 94, 267–275.
35. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Optimization of large-scale hydropower system peak operation with hybrid dynamic programming and domain knowledge. J. Clean. Prod. 2018, 171, 390–402.
36. Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimizing electrical power production of hydropower system by uniform progressive optimality algorithm based on two-stage search mechanism and uniform design. J. Clean. Prod. 2018, 190, 432–442.
37. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Wu, X.Y. Optimization of hydropower system operation by uniform dynamic programming for dimensionality reduction. Energy 2017, 134, 718–730.
38. Chang, J.; Wang, X.; Li, Y.; Wang, Y.; Zhang, H. Hydropower plant operation rules optimization response to climate change. Energy 2018, 160, 886–897.
39. Wang, S.; Huang, G.H.; He, L. Development of a clusterwise-linear-regression-based forecasting system for characterizing DNAPL dissolution behaviors in porous media. Sci. Total Environ. 2012, 433, 141–150.
40. Chau, K.W.; Wu, C.L.; Li, Y.S. Comparison of several flood forecasting models in Yangtze River. J. Hydrol. Eng. 2005, 10, 485–491.
41. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
42. Huang, S.Z.; Chang, J.X.; Huang, Q.; Chen, Y.T. Monthly streamflow prediction using modified EMD-based support vector machine. J. Hydrol. 2014, 511, 764–775.
43. Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimization of hydropower reservoirs operation balancing generation benefit and ecological requirement with parallel multi-objective genetic algorithm. Energy 2018, 153, 706–718.
44. Feng, Z.K.; Niu, W.J.; Cheng, C.T. Optimal allocation of hydropower and hybrid electricity injected from inter-regional transmission lines among multiple receiving-end power grids in China. Energy 2018, 162, 444–452.
45. Feng, Z.K.; Niu, W.J.; Wang, S.; Cheng, C.T.; Jiang, Z.Q.; Qin, H.; Liu, Y. Developing a successive linear programming model for head-sensitive hydropower system operation considering power shortage aspect. Energy 2018, 155, 252–261.
46. Chang, J.; Meng, X.; Wang, Z.; Wang, X.; Huang, Q. Optimized cascade reservoir operation considering ice flood control and power generation. J. Hydrol. 2014, 519, 1042–1051.
47. Niu, W.J.; Feng, Z.K.; Cheng, C.T. Optimization of variable-head hydropower system operation considering power shortage aspect with quadratic programming and successive approximation. Energy 2018, 143, 1020–1028.
48. Feng, Z.K.; Niu, W.J.; Cheng, C.T.; Zhou, J.Z. Peak shaving operation of hydro-thermal-nuclear plants serving multiple power grids by linear programming. Energy 2017, 135, 210–219.
49. Feng, Z.K.; Niu, W.J.; Zhou, J.Z.; Cheng, C.T.; Qin, H.; Jiang, Z.Q. Parallel multi-objective genetic algorithm for short-term economic environmental hydrothermal scheduling. Energies 2017, 10, 163.
50. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 513–529.
51. Feng, Z.K.; Niu, W.J.; Cheng, C.T. Multi-objective quantum-behaved particle swarm optimization for economic environmental hydrothermal energy system scheduling. Energy 2017, 131, 165–178.
52. Chang, J.; Wang, Y.; Istanbulluoglu, E.; Bai, T.; Huang, Q.; Yang, D.; Huang, S. Impact of climate change and human activities on runoff in the Weihe River Basin, China. Quat. Int. 2015, 380–381, 169–179.
53. Feng, Z.K.; Niu, W.J.; Zhou, J.Z.; Cheng, C.T.; Zhang, Y.C. Scheduling of short-term hydrothermal energy system by parallel multi-objective differential evolution. Appl. Soft Comput. 2017, 61, 58–71.
54. Zimmer, C.A.; Heathcote, I.W.; Whiteley, H.R.; Schroter, H. Low-impact-development practices for stormwater: Implications for urban hydrology. Can. Water Resour. J. 2007, 32, 193–212.
55. Abu-Zreig, M.; Rudra, R.P.; Lalonde, M.N.; Whiteley, H.R.; Kaushik, N.K. Experimental investigation of runoff reduction and sediment removal by vegetated filter strips. Hydrol. Process. 2004, 18, 2029–2037.
Figure 1. The sketch map of the artificial neural network (ANN) model.
Figure 2. The sketch map of the support vector machine (SVM) model.
Figure 3. Deterministic optimization results by dynamic programming for Hongjiadu reservoir in different periods (month).
Figure 4. Sensitivity of the number of hidden nodes in the ANN method for Hongjiadu reservoir. RMSE—root-mean-square error.
Figure 5. Simulation results of the extreme learning machine (ELM) model for Hongjiadu reservoir in 10 runs. GGR—generation guarantee rate; APG—average power generation.
Figure 6. Comparison of different methods for Hongjiadu reservoir. DP—dynamic programming; MLR—multiple linear regression; SGM—scheduling graph method.
Figure 7. Average power output obtained by different methods for Hongjiadu reservoir.
Figure 8. Water level of different methods for Hongjiadu reservoir.
Figure 9. Graphic models (outflow–inflow–water level) for Hongjiadu reservoir in August: (a) DP; (b) SVM; (c) ELM; (d) ANN.
Table 1. Some parameters involved in the linear operation rule of Hongjiadu reservoir.

Coefficient | Month 1 | Month 3 | Month 5 | Month 7 | Month 9 | Month 11
a           | 740.9   | 966.6   | −205.9  | −7001.2 | 2698.6  | 6297.8
b           | −0.54   | −0.73   | 0.30    | 6.30    | −2.34   | −5.49
c           | −0.04   | 0.02    | 0.58    | 0.50    | 0.73    | 0.84
Table 2. Comparison of different methods in Hongjiadu reservoir. DP—dynamic programming; MLR—multiple linear regression; ANN—artificial neural network; ELM—extreme learning machine; SVM—support vector machine; SGM—scheduling graph method; GGR—generation guarantee rate; APG—average power generation.

Method         | DP    | SGM    | MLR   | ANN   | ELM   | SVM
APG (10^8 kWh) | 23.38 | 21.03  | 21.36 | 22.41 | 23.11 | 22.71
Gap (%)        | -     | −10.05 | −8.64 | −4.15 | −1.15 | −2.87
GGR (%)        | 98.18 | 89.84  | 92.97 | 95.83 | 97.66 | 97.40
Gap (%)        | -     | −8.49  | −5.31 | −2.39 | −0.53 | −0.79

Note: Gap = (Method − DP)/DP × 100%; Gap denotes the gap between each method and DP.
