
CN107590565A - A kind of method and device for building building energy consumption forecast model

A kind of method and device for building building energy consumption forecast model

Info

Publication number
CN107590565A
CN107590565A, CN107590565B, CN201710806517A
Authority
CN
China
Prior art keywords
influence factor
neural network
network training
training model
main influence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710806517.3A
Other languages
Chinese (zh)
Other versions
CN107590565B (en)
Inventor
宋扬
官泽
孔祥旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shougang Automation Information Technology Co Ltd
Original Assignee
Beijing Shougang Automation Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shougang Automation Information Technology Co Ltd filed Critical Beijing Shougang Automation Information Technology Co Ltd
Priority to CN201710806517.3A priority Critical patent/CN107590565B/en
Publication of CN107590565A publication Critical patent/CN107590565A/en
Application granted granted Critical
Publication of CN107590565B publication Critical patent/CN107590565B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the invention provide a method and device for constructing a building energy consumption prediction model. The method includes: obtaining an energy consumption influence factor set; dividing it into a linear correlation influence factor set and a nonlinear correlation influence factor set; constructing a corresponding Bayesian network model for each set; determining a first main influence factor, a first non-main influence factor, a second main influence factor and a second non-main influence factor based on the corresponding Bayesian network models; constructing the respective BP neural network training models; training each BP neural network training model on training sample data; performing prediction tests on each trained BP neural network training model with preset test sample data and outputting prediction result values; and, if the error of the prediction result values is within a preset error range, outputting the energy consumption prediction model of the linear correlation influence factors and the energy consumption prediction model of the nonlinear correlation influence factors.

Description

Method and device for constructing building energy consumption prediction model
Technical Field
The invention belongs to the technical field of data analysis in the building industry, and particularly relates to a method and a device for constructing a building energy consumption prediction model.
Background
Analysis of building energy consumption trends has been a research hotspot for scholars at home and abroad for many years. Whichever analysis method is adopted, existing approaches lack a description of the key influence factors in an uncertain energy consumption system, so the prediction precision is low when the energy consumption trend is predicted and the trend cannot be predicted accurately.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides a method and a device for constructing a building energy consumption prediction model, which are used for solving the technical problem of low prediction precision when the building energy consumption trend is predicted in the prior art.
The embodiment of the invention provides a method for constructing a building energy consumption prediction model, which comprises the following steps:
acquiring building prior data, and acquiring an energy consumption influence factor set based on the prior data;
classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner;
respectively determining, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set;
constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values;
and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
In the above scheme, the first BP neural network training model is constructed based on the preprocessed first main influence factor and the preprocessed first non-main influence factor; before constructing a second BP neural network training model based on the preprocessed second main influence factor and the preprocessed second non-main influence factor, the method comprises the following steps:
and performing ashing pretreatment and normalization pretreatment on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
In the above solution, the determining, based on the corresponding bayesian network model, a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set, respectively, includes:
respectively calculating probability distribution of each node in the directed acyclic graph in corresponding Bayes network models, and respectively acquiring relative weight of each node based on the probability distribution; each node corresponds to each influence factor;
and respectively determining the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor.
In the above scheme, the variables of the first BP neural network training model and the second BP neural network training model include:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision is ε, and the maximum number of learning iterations is M;
the hidden-layer input weight is w_ih and the hidden-layer output weight is w_ho; the threshold of each hidden-layer node is b_h and the threshold of each output-layer node is b_o;
the activation function is f(·);
the error function is e = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)², where yo_o is any component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x_1, x_2, …, x_n);
the expected output vector is d = (d_1, d_2, …, d_q);
the input vector of the hidden layer is hi = (hi_1, hi_2, …, hi_p);
the output vector of the hidden layer is ho = (ho_1, ho_2, …, ho_p);
the input vector of the output layer is yi = (yi_1, yi_2, …, yi_q);
the output vector of the output layer is yo = (yo_1, yo_2, …, yo_q).
In the foregoing scheme, the acquiring training sample data includes:
segmenting the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after the normalization preprocessing respectively, to form n overlapping data segments of length m + 1; each data segment is one training sample.
In the foregoing solution, the training the first BP neural network training model and the second BP neural network training model based on the training sample data respectively includes:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layers of the first BP neural network training model and the second BP neural network training model;
respectively determining the partial derivative −δ_h of the error function with respect to each node of the hidden layers of the first BP neural network training model and the second BP neural network training model;
respectively correcting the hidden-layer output weight w_ho by using the partial derivative δ_o of each output-layer node and ho_h;
respectively correcting the hidden-layer input weight w_ih by using the partial derivative −δ_h of each hidden-layer node and x_i; wherein x_i is any node of the input layer of the corresponding BP neural network model.
In the foregoing solution, the predicting the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting a prediction result value includes:
performing inverse analysis on the test sample data by utilizing the normalization function, and outputting the test sample data after the primary reduction; wherein x_max is the maximum value in the test sample data sequence, and x_min is the minimum value in the test sample data sequence;
secondarily reducing the test sample data after the primary reduction by using the ashing reduction function, and outputting the test sample data after the secondary reduction;
and respectively predicting the first BP neural network training model and the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a prediction result value.
In the foregoing solution, the predicting the first BP neural network training model and the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a prediction result value includes:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
and fitting the first predicted value and the second predicted value by using a linear regression function to obtain the predicted result value.
The embodiment of the invention also provides a device for constructing the building energy consumption prediction model, which comprises:
the acquisition unit is used for acquiring prior building data and acquiring an energy consumption influence factor set based on the prior data;
the classification unit is used for classifying the energy consumption influence factors and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
a first constructing unit, configured to construct a corresponding bayesian network model for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner, respectively;
a determining unit, configured to determine, based on the respective bayesian network models, a first primary influence factor and a first non-primary influence factor in the set of linear correlation influence factors, and a second primary influence factor and a second non-primary influence factor in the set of non-linear correlation influence factors, respectively;
the second construction unit is used for constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
the training unit is used for acquiring training sample data and respectively training the first BP neural network training model and the second BP neural network training model based on the training sample data;
the prediction unit is used for respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data and outputting prediction result values;
and the output unit is used for judging whether the error of the prediction result value is within a preset error range or not, and outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor if the error of the prediction result value is within the preset error range.
In the above scheme, the apparatus further comprises: the preprocessing unit is used for constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor in the second constructing unit; before a second BP neural network training model is constructed based on the second main influence factors and the second non-main influence factors after pretreatment, ashing pretreatment and normalization pretreatment are carried out on the first main influence factors, the first non-main influence factors, the second main influence factors and the second non-main influence factors.
The embodiment of the invention provides a method and a device for a building energy consumption prediction model, wherein the method comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor, a first non-main influence factor and a second main influence factor in the linear correlation influence factor set and a second non-main influence factor in the non-linear correlation influence factor set based on the corresponding Bayesian network model; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; judging whether the error of the predicted result value is within a preset error range, and if the error of the predicted result value is within the preset error range, outputting an energy consumption prediction model of the linear correlation influence factor and an energy consumption prediction model of the nonlinear correlation influence factor; therefore, the Bayesian network model can be used for acquiring main key factors from multiple building factor influence events, namely main influence factors of building energy consumption; and continuously training and predicting and checking by using the BP neural network training model to obtain an energy consumption prediction model approximate to the fitting degree of real data, so that the prediction precision of the building energy consumption prediction model can be improved, and the trend of building energy consumption can be accurately predicted.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing a building energy consumption prediction model according to an embodiment of the present invention;
fig. 2 is an overall schematic diagram of building energy consumption prediction model construction provided in the second embodiment of the present invention.
Detailed Description
In order to solve the technical problem of low prediction precision in the prediction of the building energy consumption trend in the prior art, the invention provides a method for constructing a building energy consumption prediction model, which comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor, a first non-main influence factor and a second main influence factor in the linear correlation influence factor set and a second non-main influence factor in the non-linear correlation influence factor set based on the corresponding Bayesian network model; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
The technical solution of the present invention is further described in detail by the accompanying drawings and the specific embodiments.
Example one
The embodiment provides a method for constructing a building energy consumption prediction model, as shown in fig. 1, the method includes:
s101, building prior data are obtained, and an energy consumption influence factor set is obtained based on the prior data;
in the step, firstly, building prior data needs to be obtained, and an energy consumption influence factor set is obtained based on the prior data.
And then classifying the energy consumption influence factors, specifically, determining whether normalization and ashing treatment are adopted or not according to the acquired energy consumption influence factor set and the data distribution of each factor, respectively performing first-order linear regression fitting analysis on each pretreated factor and an energy consumption value, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set through a linear relation.
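As a concrete illustration of this classification step, the following minimal Python sketch fits a first-order linear regression between each preprocessed factor sequence and the energy consumption values and splits the factors by goodness of fit; the R² threshold of 0.8 and the function names are illustrative assumptions, since the embodiment does not specify the decision criterion.

```python
import numpy as np

def classify_factors(factors: dict, energy: np.ndarray, r2_threshold: float = 0.8):
    """Split influence factors into linearly and nonlinearly correlated sets.

    `factors` maps a factor name to its (preprocessed) data sequence; `energy`
    is the building energy consumption sequence of the same length. The R^2
    threshold of 0.8 is an illustrative assumption; the text only states that a
    first-order linear regression fit decides the split.
    """
    linear, nonlinear = {}, {}
    for name, x in factors.items():
        # First-order (degree-1) least-squares fit of energy against the factor.
        slope, intercept = np.polyfit(x, energy, deg=1)
        fitted = slope * x + intercept
        ss_res = np.sum((energy - fitted) ** 2)
        ss_tot = np.sum((energy - energy.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        (linear if r2 >= r2_threshold else nonlinear)[name] = x
    return linear, nonlinear
```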
S102, respectively combining the linear correlation influence factor set and the nonlinear correlation influence factor set to construct a corresponding Bayesian network model;
after the energy consumption influence factors are divided into a linear correlation influence factor set and a nonlinear correlation influence factor set, the incidence relation between the factors in the linear correlation influence factor set and the nonlinear correlation influence factor set is respectively obtained based on the prior data, and then corresponding Bayesian network models are respectively constructed according to the relation between the factors.
Theoretically, the Bayesian network model is formed from a directed acyclic graph G = (I, E), where I is the set of all nodes and E is the set of directed connecting edges. Let X_i denote the random variable represented by a node i in the node set I, so that the set of random variables of the node set is X = {X_i, i ∈ I}. If the joint probability of X can be expressed as shown in formula (1), the directed acyclic graph G is said to form a Bayesian network model:
p(x) = ∏_{i∈I} p(x_i | x_pa(i))    (1)
In formula (1), pa(i) denotes the set of parent nodes of node i.
Accordingly, for any random variable (any node), its probability distribution can be expressed as the product of the respective local conditional probability distributions, as shown in formula (2):
p(x_1, x_2, …, x_k) = p(x_k | x_1, x_2, …, x_{k−1}) ⋯ p(x_2 | x_1)·p(x_1)    (2)
then based on the probability distribution, the relative weights of each node in the directed acyclic graph can be calculated.
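The factorization in formulas (1) and (2) can be illustrated with a small Python sketch that evaluates the joint probability of a toy directed acyclic graph from its conditional probability tables; the three-node network, the binary states and the numbers below are invented purely for illustration, and the rule for turning these probability distributions into relative node weights is left abstract here because the text does not spell it out.

```python
from typing import Dict, List, Tuple

def joint_probability(assignment: Dict[str, int],
                      parents: Dict[str, List[str]],
                      cpt: Dict[str, Dict[Tuple[int, ...], float]]) -> float:
    """p(x) = prod_i p(x_i | x_pa(i)) over the directed acyclic graph, as in formula (1)."""
    p = 1.0
    for node, pa in parents.items():
        key = (assignment[node],) + tuple(assignment[q] for q in pa)
        p *= cpt[node][key]
    return p

# Hypothetical three-node network: area -> occupancy -> energy (binary states).
parents = {"area": [], "occupancy": ["area"], "energy": ["area", "occupancy"]}
cpt = {
    "area": {(0,): 0.4, (1,): 0.6},
    "occupancy": {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8},
    "energy": {(0, 0, 0): 0.9, (1, 0, 0): 0.1, (0, 0, 1): 0.6, (1, 0, 1): 0.4,
               (0, 1, 0): 0.5, (1, 1, 0): 0.5, (0, 1, 1): 0.2, (1, 1, 1): 0.8},
}
print(joint_probability({"area": 1, "occupancy": 1, "energy": 1}, parents, cpt))
# 0.6 * 0.8 * 0.8 = 0.384
```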
S103, respectively determining, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the nonlinear correlation influence factor set;
in this step, after the relative weight of each node has been calculated, because each node corresponds to an influence factor, the relative weight of each influence factor is obtained accordingly; the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor are then determined respectively according to the relative weights of the influence factors. The first main influence factor is the influence factor with the largest relative weight in the linear correlation influence factor set; accordingly, the other influence factors in that set are the first non-main influence factors. The second main influence factor is the influence factor with the largest relative weight in the nonlinear correlation influence factor set; correspondingly, the other influence factors in that set are the second non-main influence factors.
S104, constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
after the first main influence factor and the first non-main influence factor in the linear correlation influence factor set and the second main influence factor and the second non-main influence factor in the nonlinear correlation influence factor set have been determined, ashing pretreatment and normalization pretreatment are performed on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
Specifically, taking the first major influence factor as an example, assume that the original sequence of the first major influence factor is:
X^(0) = {X^(0)(1), X^(0)(2), …, X^(0)(n)}
the sequence generated by the first accumulation is:
X^(1) = {X^(1)(1), X^(1)(2), …, X^(1)(n)}
wherein X^(1)(k) = Σ_{i=1}^{k} X^(0)(i), k = 1, 2, …, n.
Let Z^(1) be the adjacent-mean sequence generated from X^(1); then the following sequence is generated:
Z^(1) = {Z^(1)(2), Z^(1)(3), …, Z^(1)(n)},
Z^(1)(k) = 0.5·(X^(1)(k) + X^(1)(k−1)),
the grey differential equation of the ashing model GM(1,1) is then:
X^(0)(k) + a·Z^(1)(k) = b
Denote û = (a, b)^T; then the least-squares estimate of the parameters of the grey differential equation satisfies û = (B^T·B)^(−1)·B^T·Y,
where Y = [X^(0)(2), X^(0)(3), …, X^(0)(n)]^T and B = [[−Z^(1)(2), 1], [−Z^(1)(3), 1], …, [−Z^(1)(n), 1]].
The equation dX^(1)/dt + a·X^(1) = b can then be called the whitening equation of X^(0)(k) + a·Z^(1)(k) = b.
In summary, the time-response sequence of the grey differential equation X^(0)(k) + a·Z^(1)(k) = b of GM(1,1) is:
X̂^(1)(k+1) = (X^(0)(1) − b/a)·e^(−ak) + b/a, k = 0, 1, 2, …, n−1,
then the reduction equation after ashing, also known as the ashing reduction function, is
X̂^(0)(k+1) = X̂^(1)(k+1) − X̂^(1)(k).
This makes it possible to perform ashing processing on the original sequence.
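The GM(1,1) ashing (grey) procedure above can be summarized in a short Python sketch, assuming the standard grey-model formulation that matches the equations given: first-order accumulation, adjacent-mean sequence, least-squares estimation of (a, b), the time-response sequence and the inverse-accumulation reduction.

```python
import numpy as np

def gm11(x0: np.ndarray):
    """Fit a GM(1,1) grey model to the original sequence X^(0) and return the
    restored (ashed) sequence together with the estimated parameters (a, b)."""
    n = len(x0)
    x1 = np.cumsum(x0)                                 # X^(1)(k) = sum of X^(0)(1..k)
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # Z^(1)(k) = 0.5*(X^(1)(k) + X^(1)(k-1))
    B = np.column_stack((-z1, np.ones(n - 1)))         # B matrix of the least-squares problem
    Y = x0[1:]                                         # Y = [X^(0)(2), ..., X^(0)(n)]^T
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]        # (a, b)^T = (B^T B)^(-1) B^T Y
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response sequence
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # X0_hat(k+1) = X1_hat(k+1) - X1_hat(k)
    return x0_hat, a, b
```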
Then, carrying out normalization processing on the data after ashing treatment:
Specifically, the formula y = 2·(x − x_min)/(x_max − x_min) − 1 is used to normalize the input data into the interval [−1, 1], where x_max is the maximum value in the data sequence of the first main influence factor, x_min is the minimum value in that data sequence, y is the data obtained after preprocessing, and the input data are the ashed data sequence of the first main influence factor.
Likewise, the first non-primary influencing factor, the second primary influencing factor and said second non-primary influencing factor may be pre-processed in the same way.
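A minimal sketch of the normalization step and its inverse, assuming the usual linear min-max mapping onto [−1, 1] (the text describes the formula only by its target interval and its use of x_max and x_min):

```python
import numpy as np

def normalize(x: np.ndarray):
    """Map a data sequence linearly onto [-1, 1] using its minimum and maximum."""
    x_min, x_max = x.min(), x.max()
    y = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    return y, x_min, x_max

def denormalize(y: np.ndarray, x_min: float, x_max: float):
    """Inverse of `normalize`: recover the original scale from [-1, 1] values."""
    return (y + 1.0) * (x_max - x_min) / 2.0 + x_min
```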
Then constructing a first BP neural network training model based on the preprocessed first main influence factor and the preprocessed first non-main influence factor; and constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor.
Here, the first BP neural network training model and the second BP neural network training model have the same structure, and both include: an input layer, a hidden layer, and an output layer.
The variables of the first BP neural network training model and the second BP neural network training model comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision is ε, and the maximum number of learning iterations is M;
the hidden-layer input weight is w_ih and the hidden-layer output weight is w_ho; the threshold of each hidden-layer node is b_h and the threshold of each output-layer node is b_o;
the activation function is f(·);
the error function is e = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)², where yo_o is any component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x_1, x_2, …, x_n);
the expected output vector is d = (d_1, d_2, …, d_q);
the input vector of the hidden layer is hi = (hi_1, hi_2, …, hi_p);
the output vector of the hidden layer is ho = (ho_1, ho_2, …, ho_p);
the input vector of the output layer is yi = (yi_1, yi_2, …, yi_q);
the output vector of the output layer is yo = (yo_1, yo_2, …, yo_q).
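For reference, a forward pass through the three-layer network with the variables listed above might look as follows in Python; the sigmoid transfer function and the subtraction of the thresholds b_h and b_o are conventional BP assumptions, not details fixed by the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_ih, b_h, w_ho, b_o):
    """Forward pass: input x (n,), hidden weights w_ih (p, n), hidden thresholds
    b_h (p,), output weights w_ho (q, p), output thresholds b_o (q,)."""
    hi = w_ih @ x - b_h        # hidden-layer input vector hi
    ho = sigmoid(hi)           # hidden-layer output vector ho
    yi = w_ho @ ho - b_o       # output-layer input vector yi
    yo = sigmoid(yi)           # output-layer output vector yo
    return hi, ho, yi, yo
```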
S105, acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
in this step, after the first BP neural network training model and the second BP neural network training model are constructed, training sample data needs to be acquired, and based on the training sample data, the first BP neural network training model and the second BP neural network training model are trained respectively.
Specifically, the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after the normalization preprocessing are each segmented to form n overlapping data segments of length m + 1; each data segment is one training sample. The input values are the values at the first m moments and the output value is the value at the (m+1)-th moment; sliding forward step by step constructs a sample matrix of the overlapping data segments (the sample matrix has n rows and m + 1 columns);
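A sketch of this sliding-window segmentation, under the naming assumptions of the earlier sketches:

```python
import numpy as np

def sliding_window_samples(series: np.ndarray, m: int) -> np.ndarray:
    """Cut a preprocessed data sequence into overlapping segments of length m + 1.

    Each row of the returned matrix is one training sample: the first m columns
    are the input values and the last column is the expected output value."""
    n = len(series) - m
    return np.array([series[i:i + m + 1] for i in range(n)])  # shape (n, m + 1)
```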
The rows of the segmented sample matrix are fed into each training model for training, and forward output calculation and back-propagation calculation are performed, the back-propagation calculation being used for error correction; the back-propagation calculation comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layers of the first BP neural network training model and the second BP neural network training model; respectively determining the partial derivative −δ_h of the error function with respect to each node of the hidden layers of the first BP neural network training model and the second BP neural network training model; respectively correcting the hidden-layer output weight w_ho by using the partial derivative δ_o of each output-layer node and the output value ho_h of each hidden-layer node, where h is any value from 0 to p; respectively correcting the hidden-layer input weight w_ih by using the partial derivative −δ_h of each hidden-layer node and x_i, where x_i is any node of the input layer of the corresponding BP neural network model and corresponds to an influence factor in the corresponding Bayesian network model.
And after the correction, the first BP neural network training model and the second BP neural network training model are saved.
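One training iteration corresponding to the corrections described above might look as follows; it builds on the `forward` sketch given earlier, assumes the squared-error function e = (1/2)·Σ(d_o − yo_o)² with a sigmoid activation, and introduces a learning rate eta that the text does not name.

```python
import numpy as np

def train_step(x, d, w_ih, b_h, w_ho, b_o, eta=0.1):
    """One BP iteration: forward pass, then correct w_ho with delta_o * ho_h and
    w_ih with delta_h * x_i (gradient descent on e = 1/2 * sum((d_o - yo_o)^2))."""
    hi, ho, yi, yo = forward(x, w_ih, b_h, w_ho, b_o)
    # Partial derivative of the error with respect to each output-layer node input.
    delta_o = (d - yo) * yo * (1.0 - yo)              # shape (q,)
    # Partial derivative of the error with respect to each hidden-layer node input.
    delta_h = (w_ho.T @ delta_o) * ho * (1.0 - ho)    # shape (p,)
    # Correct the hidden-to-output weights with delta_o and ho_h.
    w_ho += eta * np.outer(delta_o, ho)
    b_o -= eta * delta_o
    # Correct the input-to-hidden weights with delta_h and x_i.
    w_ih += eta * np.outer(delta_h, x)
    b_h -= eta * delta_h
    error = 0.5 * np.sum((d - yo) ** 2)
    return w_ih, b_h, w_ho, b_o, error
```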
S106, predicting the trained first BP neural network training model and second BP neural network training model respectively based on preset test sample data, and outputting a prediction result value; and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
In this step, the preset test sample data are obtained and used as input data for the first BP neural network training model and the second BP neural network training model, and the first BP neural network training model and the second BP neural network training model perform inverse normalization processing and whitening processing on the test sample data and output the restored test sample data; the restored test sample data are simply the data in their un-preprocessed form.
Specifically, based on the test sample data, the normalization function is used to perform inverse normalization, namely inverse analysis, and the test sample data after the primary reduction are output; at this time, x_max is the maximum value in the test sample data sequence and x_min is the minimum value in the test sample data sequence;
the ashing reduction function is then used to secondarily reduce the test sample data after the primary reduction, and the test sample data after the secondary reduction are output;
then, based on the test sample data after the secondary reduction, predicting the first BP neural network training model and the second BP neural network training model respectively, and outputting a prediction result value, including:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
The first predicted value and the second predicted value are then fitted by using a linear regression function to obtain a fitting result value, namely the predicted result value. Taking the actual building energy consumption value as a reference, it is judged whether the error of the predicted result value is within a preset error range; if the error of the predicted result value is within the preset error range, or the accuracy of the predicted result value reaches at least 90%, the fitting result value is output as the energy consumption influence predicted value. At the same time, the energy consumption prediction model of the linear correlation influence factors, the energy consumption prediction model of the nonlinear correlation influence factors, the first main influence factor and the second main influence factor are output.
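A sketch of this final fusion and acceptance check, assuming the linear regression fit combines the two restored prediction series against the actual energy consumption values (the text states only that a linear regression function is used) and that the preset error range corresponds to the "at least 90% accuracy" criterion:

```python
import numpy as np

def fuse_predictions(p1: np.ndarray, p2: np.ndarray, actual: np.ndarray) -> np.ndarray:
    """Fit the two restored prediction series to the actual energy consumption with
    a linear regression and return the fused prediction result values."""
    A = np.column_stack((p1, p2, np.ones_like(p1)))
    coeffs, *_ = np.linalg.lstsq(A, actual, rcond=None)
    return A @ coeffs

def within_error(pred: np.ndarray, actual: np.ndarray, tol: float = 0.10) -> bool:
    """Check whether the relative error stays inside the preset range (10% here,
    matching the 'at least 90% accuracy' criterion mentioned in the text)."""
    return bool(np.all(np.abs(pred - actual) / np.abs(actual) <= tol))
```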
If the error of the predicted result value is not within the preset error range, the learning precision, the number of learning iterations, the hidden-layer input weights and the hidden-layer output weights of the first BP neural network training model and the second BP neural network training model are reset, taking the actual building energy consumption value as a reference, to form a new first BP neural network training model and a new second BP neural network training model, which are trained and tested again in the same way to obtain a new predicted result value. This is repeated until the error of the predicted result value is within the preset error range, at which point the final energy consumption prediction model of the linear correlation influence factors and the final energy consumption prediction model of the nonlinear correlation influence factors are output, together with the first main influence factor and the second main influence factor.
Example two
Corresponding to the first embodiment, the present embodiment provides an apparatus for constructing a building energy consumption prediction model, as shown in fig. 2, the apparatus includes: the device comprises an acquisition unit 21, a classification unit 22, a first construction unit 23, a determination unit 24, a second construction unit 25, a training unit 26, a prediction unit 27, an output unit 28 and a preprocessing unit 29; wherein,
first the obtaining unit 21 is configured to obtain a priori data based on which a set of energy consumption impact factors is obtained.
The classifying unit 22 is configured to classify the energy consumption influencing factors, specifically, determine whether to adopt normalization and ashing processing according to the acquired energy consumption influencing factor set and data distribution of each factor, perform first-order linear regression fitting analysis on each preprocessed factor and the energy consumption value, and divide the energy consumption influencing factors into a linear correlation influencing factor set and a nonlinear correlation influencing factor set through a linear relationship.
After the classifying unit 22 classifies the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set, the first constructing unit 23 is configured to respectively obtain association relations between the factors in the linear correlation influence factor set and the nonlinear correlation influence factor set based on the prior data, and then respectively construct corresponding bayesian network models according to the relations between the factors.
Theoretically, the Bayesian network model is formed from a directed acyclic graph G = (I, E), where I is the set of all nodes and E is the set of directed connecting edges. Let X_i denote the random variable represented by a node i in the node set I, so that the set of random variables of the node set is X = {X_i, i ∈ I}. If the joint probability of X can be expressed as shown in formula (1), the directed acyclic graph G is said to form a Bayesian network model:
p(x) = ∏_{i∈I} p(x_i | x_pa(i))    (1)
In formula (1), pa(i) denotes the set of parent nodes of node i.
Accordingly, for any random variable (any node), its probability distribution can be expressed as the product of the respective local conditional probability distributions, as shown in formula (2):
p(x_1, x_2, …, x_k) = p(x_k | x_1, x_2, …, x_{k−1}) ⋯ p(x_2 | x_1)·p(x_1)    (2)
then based on the probability distribution, the relative weights of each node in the directed acyclic graph can be calculated.
Accordingly, after the relative weight of each node has been calculated, because each node corresponds to an influence factor, the determining unit 24 obtains the relative weight of each influence factor accordingly, and then determines the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weights of the influence factors. The first main influence factor is the influence factor with the largest relative weight in the linear correlation influence factor set; accordingly, the other influence factors in that set are the first non-main influence factors. The second main influence factor is the influence factor with the largest relative weight in the nonlinear correlation influence factor set; correspondingly, the other influence factors in that set are the second non-main influence factors.
Then pre-processing unit 29 is used to perform ashing pre-processing and normalization pre-processing on the first dominant impact factor, the first non-dominant impact factor, the second dominant impact factor and the second non-dominant impact factor.
Specifically, taking the first major influence factor as an example, assume that the original sequence of the first major influence factor is:
X^(0) = {X^(0)(1), X^(0)(2), …, X^(0)(n)}
the sequence generated by the first accumulation is:
X^(1) = {X^(1)(1), X^(1)(2), …, X^(1)(n)}
wherein X^(1)(k) = Σ_{i=1}^{k} X^(0)(i), k = 1, 2, …, n.
Let Z^(1) be the adjacent-mean sequence generated from X^(1); then the following sequence is generated:
Z^(1) = {Z^(1)(2), Z^(1)(3), …, Z^(1)(n)},
Z^(1)(k) = 0.5·(X^(1)(k) + X^(1)(k−1)),
the grey differential equation of the ashing model GM(1,1) is then:
X^(0)(k) + a·Z^(1)(k) = b
Denote û = (a, b)^T; then the least-squares estimate of the parameters of the grey differential equation satisfies û = (B^T·B)^(−1)·B^T·Y,
where Y = [X^(0)(2), X^(0)(3), …, X^(0)(n)]^T and B = [[−Z^(1)(2), 1], [−Z^(1)(3), 1], …, [−Z^(1)(n), 1]].
The equation dX^(1)/dt + a·X^(1) = b can then be called the whitening equation of X^(0)(k) + a·Z^(1)(k) = b.
In summary, the time-response sequence of the grey differential equation X^(0)(k) + a·Z^(1)(k) = b of GM(1,1) is:
X̂^(1)(k+1) = (X^(0)(1) − b/a)·e^(−ak) + b/a, k = 0, 1, 2, …, n−1,
then the reduction equation after ashing (the ashing reduction function) is
X̂^(0)(k+1) = X̂^(1)(k+1) − X̂^(1)(k).
This makes it possible to perform ashing processing on the original sequence.
Then, carrying out normalization processing on the data after ashing treatment:
Specifically, the formula y = 2·(x − x_min)/(x_max − x_min) − 1 is used to normalize the input data into the interval [−1, 1], where x_max is the maximum value in the data sequence of the first main influence factor, x_min is the minimum value in that data sequence, y is the data obtained after preprocessing, and the input data are the ashed data sequence of the first main influence factor.
Likewise, the preprocessing unit 29 may preprocess the first non-primary influencing factor, the second primary influencing factor and the second non-primary influencing factor in the same way.
Furthermore, the second constructing unit 25 may construct a first BP neural network training model based on the first major influencing factor and the first non-major influencing factor; and constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor.
Here, the first BP neural network training model and the second BP neural network training model have the same structure, and both include: an input layer, a hidden layer, and an output layer.
The variables of the first BP neural network training model and the second BP neural network training model comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision is ε, and the maximum number of learning iterations is M;
the hidden-layer input weight is w_ih and the hidden-layer output weight is w_ho; the threshold of each hidden-layer node is b_h and the threshold of each output-layer node is b_o;
the activation function is f(·);
the error function is e = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)², where yo_o is any component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x_1, x_2, …, x_n);
the expected output vector is d = (d_1, d_2, …, d_q);
the input vector of the hidden layer is hi = (hi_1, hi_2, …, hi_p);
the output vector of the hidden layer is ho = (ho_1, ho_2, …, ho_p);
the input vector of the output layer is yi = (yi_1, yi_2, …, yi_q);
the output vector of the output layer is yo = (yo_1, yo_2, …, yo_q).
After the first BP neural network training model and the second BP neural network training model are constructed, the training unit 26 is configured to obtain training sample data, and train the first BP neural network training model and the second BP neural network training model based on the training sample data, respectively.
Specifically, the training unit 26 segments the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after the normalization preprocessing respectively, to form n overlapping data segments of length m + 1; each data segment is one training sample. The input values are the values at the first m moments and the output value is the value at the (m+1)-th moment; sliding forward step by step constructs a sample matrix of the overlapping data segments (the sample matrix has n rows and m + 1 columns);
The rows of the segmented sample matrix are fed into each training model for training, and forward output calculation and back-propagation calculation are performed, the back-propagation calculation being used for error correction; the back-propagation calculation comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layers of the first BP neural network training model and the second BP neural network training model; respectively determining the partial derivative −δ_h of the error function with respect to each node of the hidden layers of the first BP neural network training model and the second BP neural network training model; respectively correcting the hidden-layer output weight w_ho by using the partial derivative δ_o of each output-layer node and the output value ho_h of each hidden-layer node, where h is any value from 0 to p; respectively correcting the hidden-layer input weight w_ih by using the partial derivative −δ_h of each hidden-layer node and x_i, where x_i is any node of the input layer of the corresponding BP neural network model and corresponds to an influence factor in the corresponding Bayesian network model.
And after the correction, the first BP neural network training model and the second BP neural network training model are saved.
The prediction unit 27 is configured to predict the trained first BP neural network training model and second BP neural network training model respectively based on preset test sample data, and output a prediction result value;
Specifically, based on the test sample data, the prediction unit 27 uses the normalization function to perform inverse normalization, namely inverse analysis, and outputs the test sample data after the primary reduction; where x_max is the maximum value in the test sample data sequence and x_min is the minimum value in the test sample data sequence;
the ashing reduction function is then used to secondarily reduce the test sample data after the primary reduction, and the test sample data after the secondary reduction are output; the restored test sample data are simply the data in their un-preprocessed form.
Then, based on the test sample data after the secondary reduction, predicting the first BP neural network training model and the second BP neural network training model respectively, and outputting a prediction result value, including:
predicting the first BP neural network training model based on the test sample data after the secondary reduction, and outputting a first prediction result value;
predicting the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a second prediction result value;
respectively processing the first prediction result value and the second prediction result value by using an inverse normalization processing function and a whitening processing function to obtain a first prediction value and a second prediction value;
and fitting the first predicted value and the second predicted value by using a linear regression function to obtain a fitting result value, namely a predicted result value.
The output unit 28 is configured to determine whether an error of the predicted result value is within a preset error range, and output the energy consumption prediction model of the linear correlation impact factor and the energy consumption prediction model of the nonlinear correlation impact factor if the error of the predicted result value is within the preset error range.
Specifically, the output unit 28 determines whether the error of the prediction result value is within a preset error range based on the actual building energy consumption value, and outputs the fitting result value as the energy consumption influence prediction value if the error of the prediction result value is within the preset error range or the accuracy of the prediction result value reaches at least 90%. And simultaneously outputting the energy consumption prediction model of the linear correlation influence factor, the energy consumption prediction model of the nonlinear correlation influence factor, the first main influence factor and the second main influence factor.
If the error of the predicted result value is not within the preset error range, the learning precision, the number of learning iterations, the hidden-layer input weights and the hidden-layer output weights of the first BP neural network training model and the second BP neural network training model are reset, taking the actual building energy consumption value as a reference, to form a new first BP neural network training model and a new second BP neural network training model, which are trained and tested again in the same way to obtain a new predicted result value. This is repeated until the error of the predicted result value is within the preset error range, at which point the final energy consumption prediction model of the linear correlation influence factors and the final energy consumption prediction model of the nonlinear correlation influence factors are output, together with the first main influence factor and the second main influence factor.
The method and the device for constructing the building energy consumption prediction model provided by the embodiment of the invention have the beneficial effects that at least:
the embodiment of the invention provides a method and a device for a building energy consumption prediction model, wherein the method comprises the following steps: acquiring prior data, and acquiring an energy consumption influence factor set based on the prior data; classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set; respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner; respectively determining a first main influence factor, a first non-main influence factor and a second main influence factor in the linear correlation influence factor set and a second non-main influence factor in the non-linear correlation influence factor set based on the corresponding Bayesian network model; constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor; acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data; respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values; judging whether the error of the predicted result value is within a preset error range, and if the error of the predicted result value is within the preset error range, outputting an energy consumption prediction model of the linear correlation influence factor and an energy consumption prediction model of the nonlinear correlation influence factor; therefore, the Bayesian network model can be used for acquiring main key factors from multiple building factor influence events, namely main influence factors of building energy consumption; and continuously training and predicting by using the BP neural network training model to obtain an energy consumption prediction model approximate to the fitting degree of real data, so that the prediction precision of the building energy consumption prediction model can be improved, and the trend of building energy consumption can be accurately predicted.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (10)

1. A method of constructing a building energy consumption prediction model, the method comprising:
acquiring building prior data, and acquiring an energy consumption influence factor set based on the prior data;
classifying the energy consumption influence factors, and dividing the energy consumption influence factors into a linear correlation influence factor set and a nonlinear correlation influence factor set;
respectively constructing corresponding Bayesian network models for the set of linear correlation influence factors and the set of nonlinear correlation influence factors in a combined manner;
respectively determining, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the linear correlation influence factor set, and a second main influence factor and a second non-main influence factor in the non-linear correlation influence factor set;
constructing a first BP neural network training model based on the first main influence factor and the first non-main influence factor; constructing a second BP neural network training model based on the preprocessed second main influence factor and the second non-main influence factor;
acquiring training sample data, and training the first BP neural network training model and the second BP neural network training model respectively based on the training sample data;
respectively carrying out prediction inspection on the trained first BP neural network training model and second BP neural network training model based on preset test sample data, and outputting prediction result values;
and judging whether the error of the prediction result value is within a preset error range, and if so, outputting the energy consumption prediction model of the linear correlation influence factor and the energy consumption prediction model of the nonlinear correlation influence factor.
2. The method of claim 1, wherein the constructing a first BP neural network training model based on the preprocessed first major influencing factor, the first non-major influencing factor; before constructing a second BP neural network training model based on the preprocessed second main influence factor and the preprocessed second non-main influence factor, the method comprises the following steps:
and performing ashing pretreatment and normalization pretreatment on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor.
3. The method of claim 1, wherein the determining, based on the respective bayesian network models, a first dominant impact factor and a first non-dominant impact factor in the set of linear correlation impact factors, and a second dominant impact factor and a second non-dominant impact factor in the set of nonlinear correlation impact factors, respectively, comprises:
respectively calculating probability distribution of each node in the directed acyclic graph in corresponding Bayes network models, and respectively acquiring relative weight of each node based on the probability distribution; each node corresponds to each influence factor;
and respectively determining the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor according to the relative weight of each influence factor.
4. The method of claim 1, wherein the variables of the first and second BP neural network training models comprise:
the number of nodes of the input layer is n, the number of nodes of the hidden layer is p, and the number of nodes of the output layer is q;
the learning precision is ε, and the maximum number of learning iterations is M;
the hidden-layer input weight is w_ih and the hidden-layer output weight is w_ho; the threshold of each hidden-layer node is b_h and the threshold of each output-layer node is b_o;
the activation function is f(·);
the error function is e = (1/2)·Σ_{o=1}^{q} (d_o − yo_o)², where yo_o is any component of the output vector of the output layer and d_o is the corresponding component of the expected output vector;
the input vector is x = (x_1, x_2, …, x_n);
the expected output vector is d = (d_1, d_2, …, d_q);
the input vector of the hidden layer is hi = (hi_1, hi_2, …, hi_p);
the output vector of the hidden layer is ho = (ho_1, ho_2, …, ho_p);
the input vector of the output layer is yi = (yi_1, yi_2, …, yi_q);
the output vector of the output layer is yo = (yo_1, yo_2, …, yo_q).
5. The method of claim 2, wherein said obtaining training sample data comprises:
segmenting the data sequences of the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor after the normalization preprocessing respectively, to form n overlapping data segments of length m + 1; each data segment is one training sample.
6. The method of claim 4, wherein the training the first BP neural network training model and the second BP neural network training model based on the training sample data, respectively, comprises:
respectively determining the partial derivative δ_o of the error function with respect to each node of the output layers of the first BP neural network training model and the second BP neural network training model;
respectively determining the partial derivative −δ_h of the error function with respect to each node of the hidden layers of the first BP neural network training model and the second BP neural network training model;
respectively correcting the hidden-layer output weight w_ho by using the partial derivative δ_o of each output-layer node and ho_h;
respectively correcting the hidden-layer input weight w_ih by using the partial derivative −δ_h of each hidden-layer node and x_i; wherein x_i is any node of the input layer of the corresponding BP neural network model.
7. The method of claim 1, wherein the predicting the trained first and second BP neural network training models based on preset test sample data and outputting a prediction result value comprises:
performing inverse analysis on the test sample data by utilizing the normalization function, and outputting the test sample data y after the primary reduction; wherein x_max is the maximum value in the test sample data sequence, and x_min is the minimum value in the test sample data sequence;
using ashing reduction functionSecondarily restoring the test sample data subjected to the primary restoration, and outputting the test sample data subjected to the secondary restoration;
and respectively predicting the first BP neural network training model and the second BP neural network training model based on the test sample data after the secondary reduction, and outputting a prediction result value.
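The two restoration steps of claim 7, sketched under the assumptions that the normalization was a min–max mapping and that the grey preprocessing was a first-order accumulated generating operation (1-AGO); both forms are assumptions, since the claim only names the functions.

```python
import numpy as np

def denormalize(y, x_min, x_max):
    # inverse of an assumed min-max normalisation y = (x - x_min) / (x_max - x_min)
    return y * (x_max - x_min) + x_min

def grey_restore(accumulated):
    # inverse of an assumed 1-AGO: x0(k) = x1(k) - x1(k-1)
    accumulated = np.asarray(accumulated, dtype=float)
    return np.diff(accumulated, prepend=0.0)

if __name__ == "__main__":
    y_norm = np.array([0.0, 0.25, 0.75, 1.0])        # made-up test sample data
    once_restored = denormalize(y_norm, x_min=10.0, x_max=50.0)
    print(grey_restore(once_restored))                # twice-restored sequence
```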
8. The method of claim 7, wherein performing prediction with the first BP neural network training model and the second BP neural network training model based on the twice-restored test sample data and outputting the prediction result value comprises:
performing prediction with the first BP neural network training model based on the twice-restored test sample data, and outputting a first prediction result value;
performing prediction with the second BP neural network training model based on the twice-restored test sample data, and outputting a second prediction result value;
processing the first prediction result value and the second prediction result value respectively with an inverse normalization function and a whitening function to obtain a first prediction value and a second prediction value; and
fitting the first prediction value and the second prediction value with a linear regression function to obtain the prediction result value.
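A sketch of the fusion step of claim 8, fitting the two restored prediction series against observed energy consumption with ordinary least squares; the use of numpy.linalg.lstsq and the demo values are assumptions.

```python
import numpy as np

def fuse_predictions(pred_linear, pred_nonlinear, observed):
    """Fit observed ~ a * pred_linear + b * pred_nonlinear + c and return the fit."""
    A = np.column_stack([pred_linear, pred_nonlinear, np.ones(len(observed))])
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return A @ coeffs, coeffs

if __name__ == "__main__":
    p1 = np.array([100.0, 110.0, 120.0])   # first prediction values (made up)
    p2 = np.array([ 95.0, 118.0, 119.0])   # second prediction values (made up)
    obs = np.array([ 98.0, 115.0, 121.0])  # observed energy consumption (made up)
    fitted, coeffs = fuse_predictions(p1, p2, obs)
    print(fitted, coeffs)
```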
9. An apparatus for constructing a building energy consumption prediction model, the apparatus comprising:
an acquisition unit configured to acquire prior building data and obtain a set of energy consumption influence factors based on the prior data;
a classification unit configured to classify the energy consumption influence factors into a set of linear correlation influence factors and a set of nonlinear correlation influence factors;
a first construction unit configured to construct a corresponding Bayesian network model for each of the set of linear correlation influence factors and the set of nonlinear correlation influence factors;
a determination unit configured to determine, based on the corresponding Bayesian network models, a first main influence factor and a first non-main influence factor in the set of linear correlation influence factors and a second main influence factor and a second non-main influence factor in the set of nonlinear correlation influence factors;
a second construction unit configured to construct a first BP neural network training model based on the first main influence factor and the first non-main influence factor, and to construct a second BP neural network training model based on the preprocessed second main influence factor and second non-main influence factor;
a training unit configured to acquire training sample data and to train the first BP neural network training model and the second BP neural network training model, respectively, based on the training sample data;
a prediction unit configured to perform prediction tests on the trained first BP neural network training model and second BP neural network training model, respectively, based on the preset test sample data, and to output prediction result values; and
an output unit configured to judge whether the error of the prediction result values is within a preset error range and, if so, to output the energy consumption prediction model of the linear correlation influence factors and the energy consumption prediction model of the nonlinear correlation influence factors.
10. The apparatus of claim 9, further comprising: a preprocessing unit configured to perform grey (ashing) preprocessing and normalization preprocessing on the first main influence factor, the first non-main influence factor, the second main influence factor and the second non-main influence factor before the second construction unit constructs the first BP neural network training model based on the preprocessed first main influence factor and first non-main influence factor and constructs the second BP neural network training model based on the preprocessed second main influence factor and second non-main influence factor.
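Finally, a sketch of the preprocessing named in claim 10, again assuming a 1-AGO for the grey (ashing) step and min–max scaling for the normalization step; both choices are assumptions, not details stated in the claim.

```python
import numpy as np

def grey_accumulate(series):
    # assumed grey (ashing) preprocessing: first-order accumulated generating operation
    return np.cumsum(np.asarray(series, dtype=float))

def min_max_normalize(series):
    # assumed normalization preprocessing: min-max scaling to [0, 1]
    s = np.asarray(series, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def preprocess(factor_series):
    """Apply grey accumulation followed by normalization to one influence factor."""
    return min_max_normalize(grey_accumulate(factor_series))

if __name__ == "__main__":
    print(preprocess([3.0, 1.0, 4.0, 1.0, 5.0]))
```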
CN201710806517.3A 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model Active CN107590565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710806517.3A CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Publications (2)

Publication Number Publication Date
CN107590565A true CN107590565A (en) 2018-01-16
CN107590565B CN107590565B (en) 2021-01-05

Family

ID=61051121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710806517.3A Active CN107590565B (en) 2017-09-08 2017-09-08 Method and device for constructing building energy consumption prediction model

Country Status (1)

Country Link
CN (1) CN107590565B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104269849A (en) * 2014-10-17 2015-01-07 国家电网公司 Energy managing method and system based on building photovoltaic micro-grid
CN104331737A (en) * 2014-11-21 2015-02-04 国家电网公司 Office building load prediction method based on particle swarm neural network
CN104597842A (en) * 2015-02-02 2015-05-06 武汉理工大学 BP neutral network heavy machine tool thermal error modeling method optimized through genetic algorithm
CN104765916A (en) * 2015-03-31 2015-07-08 西南交通大学 Dynamics performance parameter optimizing method of high-speed train
CN104834808A (en) * 2015-04-07 2015-08-12 青岛科技大学 Back propagation (BP) neural network based method for predicting service life of rubber absorber
CN105373830A (en) * 2015-12-11 2016-03-02 中国科学院上海高等研究院 Prediction method and system for error back propagation neural network and server
CN105631539A (en) * 2015-12-25 2016-06-01 上海建坤信息技术有限责任公司 Intelligent building energy consumption prediction method based on support vector machine
CN106161138A (en) * 2016-06-17 2016-11-23 贵州电网有限责任公司贵阳供电局 A kind of intelligence automatic gauge method and device
CN106874581A (en) * 2016-12-30 2017-06-20 浙江大学 A kind of energy consumption of air conditioning system in buildings Forecasting Methodology based on BP neural network model
CN106951611A (en) * 2017-03-07 2017-07-14 哈尔滨工业大学 A kind of severe cold area energy-saving design in construction optimization method based on user's behavior
CN106991504A (en) * 2017-05-09 2017-07-28 南京工业大学 Building energy consumption prediction method and system based on subentry measurement time sequence and building

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI, Lirong: "Research on Environment Modeling and Control Strategy of Solar Greenhouses", China Master's Theses Full-text Database, Agriculture Science and Technology Series *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019214309A1 (en) * 2018-05-10 2019-11-14 阿里巴巴集团控股有限公司 Model test method and device
CN112232476B (en) * 2018-05-10 2024-04-16 创新先进技术有限公司 Method and device for updating test sample set
US11176418B2 (en) 2018-05-10 2021-11-16 Advanced New Technologies Co., Ltd. Model test methods and apparatuses
CN112232476A (en) * 2018-05-10 2021-01-15 创新先进技术有限公司 Method and device for updating test sample set
CN108615071B (en) * 2018-05-10 2020-11-24 创新先进技术有限公司 Model testing method and device
CN108615071A (en) * 2018-05-10 2018-10-02 阿里巴巴集团控股有限公司 The method and device of model measurement
TWI698808B (en) * 2018-05-10 2020-07-11 香港商阿里巴巴集團服務有限公司 Model testing method and device
CN108764568B (en) * 2018-05-28 2020-10-23 哈尔滨工业大学 Data prediction model tuning method and device based on LSTM network
CN108764568A (en) * 2018-05-28 2018-11-06 哈尔滨工业大学 A kind of data prediction model tuning method and device based on LSTM networks
CN109063903B (en) * 2018-07-19 2021-04-09 山东建筑大学 Building energy consumption prediction method and system based on deep reinforcement learning
CN109063903A (en) * 2018-07-19 2018-12-21 山东建筑大学 A kind of building energy consumption prediction technique and system based on deeply study
CN109325631A (en) * 2018-10-15 2019-02-12 华中科技大学 Electric car charging load forecasting method and system based on data mining
CN111062876A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111062876B (en) * 2018-10-17 2023-08-08 北京地平线机器人技术研发有限公司 Method and device for correcting model training and image correction and electronic equipment
CN111179108A (en) * 2018-11-12 2020-05-19 珠海格力电器股份有限公司 Method and device for predicting power consumption
CN109685252B (en) * 2018-11-30 2023-04-07 西安工程大学 Building energy consumption prediction method based on cyclic neural network and multi-task learning model
CN109685252A (en) * 2018-11-30 2019-04-26 西安工程大学 Building energy consumption prediction technique based on Recognition with Recurrent Neural Network and multi-task learning model
CN109726936A (en) * 2019-01-24 2019-05-07 辽宁工业大学 A kind of monitoring method rectified a deviation for tilting ancient masonry pagoda
CN109726936B (en) * 2019-01-24 2020-06-30 辽宁工业大学 Monitoring method for deviation correction of inclined masonry ancient tower
CN110032780A (en) * 2019-02-01 2019-07-19 浙江中控软件技术有限公司 Commercial plant energy consumption benchmark value calculating method and system based on machine learning
CN112183166A (en) * 2019-07-04 2021-01-05 北京地平线机器人技术研发有限公司 Method and device for determining training sample and electronic equipment
CN111160598A (en) * 2019-11-13 2020-05-15 浙江中控技术股份有限公司 Energy prediction and energy consumption control method and system based on dynamic energy consumption benchmark
CN111221880A (en) * 2020-04-23 2020-06-02 北京瑞莱智慧科技有限公司 Feature combination method, device, medium, and electronic apparatus
CN111859500B (en) * 2020-06-24 2023-10-10 广州大学 Prediction method and device for bridge deck elevation of rigid frame bridge
CN111859500A (en) * 2020-06-24 2020-10-30 广州大学 Method and device for predicting bridge deck elevation of rigid frame bridge
CN112230991A (en) * 2020-10-26 2021-01-15 重庆博迪盛软件工程有限公司 Software portability evaluation method based on BP neural network
CN112462708A (en) * 2020-11-19 2021-03-09 南京河海南自水电自动化有限公司 Remote diagnosis and optimized scheduling method and system for pump station
CN113552855A (en) * 2021-07-23 2021-10-26 重庆英科铸数网络科技有限公司 Industrial equipment dynamic threshold setting method and device, electronic equipment and storage medium
CN116204566A (en) * 2023-04-28 2023-06-02 深圳市欣冠精密技术有限公司 Digital factory monitoring big data processing system
CN117077854A (en) * 2023-08-15 2023-11-17 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network
CN117077854B (en) * 2023-08-15 2024-04-16 广州视声智能科技有限公司 Building energy consumption monitoring method and system based on sensor network

Also Published As

Publication number Publication date
CN107590565B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN107590565B (en) Method and device for constructing building energy consumption prediction model
Li et al. Deep learning for high-dimensional reliability analysis
CN108900346B (en) Wireless network flow prediction method based on LSTM network
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN101477172B (en) Analogue circuit fault diagnosis method based on neural network
CN110245801A (en) A kind of Methods of electric load forecasting and system based on combination mining model
CN111785014B (en) Road network traffic data restoration method based on DTW-RGCN
CN109101584B (en) Sentence classification improvement method combining deep learning and mathematical analysis
CN107977710A (en) Electricity consumption abnormal data detection method and device
CN109815855B (en) Electronic equipment automatic test method and system based on machine learning
CN103927550A (en) Handwritten number identifying method and system
CN115587666A (en) Load prediction method and system based on seasonal trend decomposition and hybrid neural network
CN112580588A (en) Intelligent flutter signal identification method based on empirical mode decomposition
CN112101487A (en) Compression method and device for fine-grained recognition model
CN107688863A (en) The short-term wind speed high accuracy combination forecasting method that adaptive iteration is strengthened
CN116819423A (en) Method and system for detecting abnormal running state of gateway electric energy metering device
CN113988415A (en) Medium-and-long-term power load prediction method
KR20190134308A (en) Data augmentation method and apparatus using convolution neural network
CN113688770A (en) Long-term wind pressure missing data completion method and device for high-rise building
CN112651500A (en) Method for generating quantization model and terminal
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN117310277A (en) Electric energy metering method, system, equipment and medium for off-board charger of electric automobile
CN110909948A (en) Soil pollution prediction method and system
CN111160419B (en) Deep learning-based electronic transformer data classification prediction method and device
Dhulipala et al. Bayesian Inference with Latent Hamiltonian Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant