
CN109993281A - A deep-learning-based causality mining method - Google Patents

A deep-learning-based causality mining method

Info

Publication number
CN109993281A
CN109993281A
Authority
CN
China
Prior art keywords
data
model
time series
alternative features
causality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910242406.3A
Other languages
Chinese (zh)
Inventor
刘博
贺玺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910242406.3A priority Critical patent/CN109993281A/en
Publication of CN109993281A publication Critical patent/CN109993281A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a deep-learning-based causality mining method. The data are first preprocessed with techniques such as missing-data imputation, data normalization, and one-hot encoding. A univariate time-series prediction of the target feature is then performed with an LSTM built on the Keras deep learning framework; the model structure and a series of hyperparameters are tuned to obtain the optimal model, and the model's R² score on the test set is recorded. The same model is then used to predict with all candidate features included, and its R² score on the test set is obtained. The difference between the two scores serves as the Granger causality score between the candidate features and the target feature, yielding a quantified number that describes the Granger causality between them. The method is also applicable to influencing-factor analysis of other time series. In summary, the proposed deep-learning-based Granger causality mining method can handle massive data, mine at greater depth, and apply to a wide range of fields.

Description

A deep-learning-based causality mining method
Technical field
The invention belongs to the field of data mining technology, and in particular relates to mining Granger causality from multivariate time-series data based on deep learning techniques.
Background technique
Time-series data are sets of data points observed at constant time intervals. They have two key properties: first, observations are time-dependent and correlated with one another; second, in addition to an overall trend, most time series also exhibit seasonality, i.e., characteristic variation within specific time windows. As the cost of data storage has fallen sharply, the large volumes of time-series data generated in daily life are now fully recorded, and such data are ubiquitous in fields such as finance, commodity prices, and transportation. In the field of air quality, for example, atmospheric visibility has declined in recent years and air quality has deteriorated; many cities frequently experience haze, which has had a considerable negative impact on people's lives and work, so the problem of air quality has received close attention from governments and the public. Mining the Granger causality between visibility and meteorological factors (apparent temperature, temperature, air pressure, etc.) can reveal hidden relationships among these features and the occurrence patterns of some severe weather, thereby providing a theoretical basis for air pollution control. Mining the Granger causality among multiple variables is therefore of great practical significance.
In causality mining applications, Granger causality is a common method for exploring causality in time-series data. As shown in Fig. 1, suppose there is a time series X composed of samples at different times, {x_1, x_2, x_3, ..., x_n}. The past of X is used to predict its future: for example, x_1 to x_{n-j} are used to predict x_{n-j+1} to x_n, producing a prediction error δ_1. Likewise, suppose there is a time series Y of the same form, composed of {y_1, y_2, y_3, ..., y_n}. The joint past of X and Y is then used to predict the future of X: for example, {x_1 to x_{n-j}, y_1 to y_{n-j}} are used to predict x_{n-j+1} to x_n, producing a new error δ_2. If δ_1 is greater than δ_2, that is, the joint prediction error of X and Y is smaller than the prediction error from X alone, then Y is said to Granger-cause X.
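As an illustration of the test just described (a hedged sketch, not part of the claimed method), the two errors δ_1 and δ_2 can be estimated with ordinary least-squares autoregressions; the lag order and the toy data-generating process below are assumptions of this example:

```python
import numpy as np

def granger_error_reduction(x, y, lag=2):
    """Compare delta_1 (predicting x from its own past) with delta_2
    (predicting x from the joint past of x and y) via least squares."""
    n = len(x)
    target = x[lag:]
    # Lagged design columns: x_{t-1} ... x_{t-lag} (and likewise for y).
    own = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    joint = np.column_stack([own] + [y[lag - k - 1:n - k - 1] for k in range(lag)])

    def rss(design):
        # Ordinary least squares with an intercept; residual sum of squares.
        A = np.column_stack([np.ones(len(design)), design])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        return float(resid @ resid)

    return rss(own), rss(joint)

# Toy example: y drives x, so the joint model should reduce the error.
rng = np.random.default_rng(0)
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
delta1, delta2 = granger_error_reduction(x, y, lag=2)
```

Because the joint regression nests the univariate one, delta2 can never exceed delta1 in-sample; a clearly smaller delta2, as here, is the signature of Granger causality.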
Existing multivariate time-series causality analyses focus mainly on qualitative tests of Granger causality between variables, and studies based on Granger causality mostly fit the data with linear regression, which cannot achieve high-accuracy prediction for the complex nonlinear data found in real life and is therefore very limited. With the development of artificial intelligence, deep learning has become widely used. Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture well suited to processing and predicting time series with long and variable lags between important events. It was first proposed by Hochreiter & Schmidhuber in 1997 and has since been refined and popularized by many researchers; owing to its excellent performance it is now widely used in tasks such as time-series forecasting.
The design goal of LSTM is clear: to avoid the long-range dependency problem. For an LSTM, remembering information over long periods is the default behavior rather than something difficult to learn. Compared with a plain RNN, each LSTM layer adds gate units of three kinds: the forget gate, the input gate, and the output gate. These gates open and close to decide whether the memory state of the network (the state from earlier steps) reaches the threshold required to enter the current layer's computation. A gate unit applies the sigmoid function to the network's memory state; if the result reaches the threshold, the gate output is multiplied with the current layer's result and passed on as the next layer's input, otherwise the result is forgotten. The gate weights of every layer are updated in each backpropagation pass during training. When a gate is open, earlier results are incorporated into the current computation; when it is closed, they no longer affect it. By controlling the gate switches, the influence of the early part of a sequence on the final result can thus be realized.
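The gate mechanism described above can be sketched as a single LSTM cell step in NumPy. This is an illustrative sketch only, not the Keras implementation the method later relies on; the stacked weight layout and names are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the parameters of the forget,
    input, candidate and output gates (4 * hidden rows)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b             # pre-activations for all four gates
    f = sigmoid(z[:hidden])                # forget gate: how much old memory to keep
    i = sigmoid(z[hidden:2 * hidden])      # input gate: how much new content to admit
    g = np.tanh(z[2 * hidden:3 * hidden])  # candidate memory content
    o = sigmoid(z[3 * hidden:])            # output gate: how much memory to expose
    c = f * c_prev + i * g                 # updated memory (cell) state
    h = o * np.tanh(c)                     # updated hidden state, bounded in (-1, 1)
    return h, c

# Run a short random sequence through the cell.
rng = np.random.default_rng(1)
n_in, hidden = 3, 5
W = rng.normal(size=(4 * hidden, n_in))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h = np.zeros(hidden)
c = np.zeros(hidden)
for _ in range(10):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

The tanh on the output is why the hidden state stays within (-1, 1), which is also why the specification later scales the inputs to [-1, 1].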
Summary of the invention
The technical problem to be solved by the invention is to provide a deep-learning-based Granger causality mining method. The data are first preprocessed with techniques such as missing-data imputation, data normalization, and one-hot encoding. A univariate time-series prediction of the target feature is then performed with an LSTM built on the Keras deep learning framework; the model structure and a series of hyperparameters are tuned to obtain the optimal model, and the model's R² score on the test set is recorded. The same model is then used to predict with all candidate features included, and its R² score on the test set is obtained. The difference between the two scores serves as the Granger causality score between the candidate features and the target feature, yielding a quantified number that describes the Granger causality between them.
For multidimensional time series with relatively large data volumes, the invention proposes a deep-learning-based method for mining the Granger causality between a target feature and other candidate features (of one or more dimensions). The data are first preprocessed: the type, distribution, and missingness of each feature are examined; categorical features are one-hot encoded or standardized with LabelEncoder; and missing values in the data can be imputed with the mode or median. Based on the Keras deep learning framework, an LSTM then performs the univariate time-series prediction of the target feature; the model structure and a series of hyperparameters are tuned to obtain the optimal model, and the model's R² score on the test set is recorded. The same model is then used to predict with all candidate features, and its R² score on the test set is obtained. The difference between the two scores serves as the Granger causality score between the candidate features and the target feature, yielding a quantified number that describes their Granger causality.
To achieve the above goals, the invention adopts the following technical scheme. To train an LSTM model on the training set, the data must first be preprocessed, including converting the time-series problem into a supervised learning problem, making the series stationary, and normalizing the data. For a time-series problem, the conversion can be realized by using the observation at the previous time step as the input feature X and the observation at the current time step as the output Y. Because what is converted is a sequence of time-ordered data, it cannot be combined into a truly supervised problem with a clear one-to-one input/output relation at the boundaries: at the very beginning or end of the data set, one position always lacks a counterpart during training. To solve this, the input feature at the very beginning can be set to 0, with its corresponding output being the first element of the time series. A non-stationary time series can be converted into a stationary one by first-order differencing. Furthermore, standardizing the data before input can very effectively improve the convergence speed and effect, especially when the activation function is sigmoid or tanh: their region of largest gradient lies near 0, and when the input is very large or very small, sigmoid and tanh are essentially flat, so during gradient descent the gradient tends to 0 and optimization becomes very slow. The default activation function of LSTM is tanh, whose output range is [-1, 1], which is also the preferred range for time-series data; the MinMaxScaler class can therefore be used to transform the data set into the range [-1, 1]. After preprocessing, the LSTM model can be built. The prepared data set is first split into a training set and a test set; both are then decomposed into input and output variables, and the inputs are reshaped into the 3D format expected by the LSTM, i.e., [samples, time steps, features]. The LSTM model is then defined and fitted: 50 neurons are defined in the first hidden layer and one neuron in the output layer for predicting the target feature, and the input shape is one time step with all candidate features. The mean absolute error (MAE) loss function and the efficient Adam variant of stochastic gradient descent are used, and the model is fitted for 50 training epochs with a batch size of 72. Finally, by setting the validation_data parameter in the fit() function, the training and test loss are tracked during training; at the end of the run, the training and prediction losses are plotted, as shown in Fig. 2. On the test set, to better characterize the strength of Granger causality, R² is used to measure the quality of prediction, with the formula:
R² = 1 - Σ_i (y_i - ŷ_i)² / Σ_i (y_i - ȳ)²
where ŷ_i is the predicted time series, y_i is the observed true value, and ȳ is the mean of the time series. R² can therefore be interpreted as the explained variance of the prediction model: the closer the value is to 1, the better the model's prediction, while a negative value indicates that the model's predictive ability is worse than predicting the mean. R² allows Granger causality to be defined more formally: by predicting twice, once with the target feature only and once with all candidate features, and computing the difference between the two R² values on the test set, one obtains the Granger causality score, which quantifies the strength of Granger causality between the candidate features and the target feature.
Through experiments based on the above technical description, a causality score between the candidate features and the target feature is finally obtained: the larger the value, the stronger the Granger causality between the candidate features and the target feature; conversely, a small value indicates a weak relationship or even no Granger causality.
A deep-learning-based Granger causality mining method comprises the following steps:
Step 1: obtain time-related sequence data and clean the data.
Step 2: preprocess the cleaned data to handle missing and non-stationary data.
Step 3: build the model on the preprocessed data, construct training and test sets, and tune parameters to obtain the optimal prediction model.
Step 4: use the trained model to predict with the target-feature data alone and with the data containing the candidate features, and use the two training results to obtain the Granger causality score between the candidate features and the target feature.
Preferably, step 2 specifically comprises the following steps:
Step 2.1: impute missing values in the data using the mode or median;
Step 2.2: apply one-hot encoding or LabelEncoder standardization to categorical features;
Step 2.3: convert non-stationary time series into stationary ones by first-order differencing;
Step 2.4: convert the time-series problem into a supervised learning problem.
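Steps 2.1 to 2.4 can be sketched as follows. This is a minimal illustration with assumed function names; it uses the median for imputation and the zero first input described in the specification:

```python
import numpy as np

def impute_median(x):
    """Step 2.1: fill missing values (NaN) with the median of the observed values."""
    x = np.asarray(x, dtype=float).copy()
    x[np.isnan(x)] = np.nanmedian(x)
    return x

def difference(x):
    """Step 2.3: first-order differencing to make the series stationary."""
    return np.diff(np.asarray(x, dtype=float))

def to_supervised(x):
    """Step 2.4: previous observation as input X, current one as output Y;
    the first input is set to 0 so every output has a counterpart."""
    x = np.asarray(x, dtype=float)
    X = np.concatenate([[0.0], x[:-1]])
    return X, x

def scale_minus1_1(x):
    """Scale into [-1, 1], the preferred range for the tanh activation."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0
```

Categorical encoding (step 2.2) is omitted here since the specification delegates it to one-hot encoding or scikit-learn's LabelEncoder.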
Preferably, step 3 specifically comprises the following steps:
Step 3.1: split the training and test sets according to the data volume;
Step 3.2: define the LSTM model, with 50 neurons in the first hidden layer and one neuron in the output layer for predicting the target feature; the input shape is one time step with all candidate features;
Step 3.3: define the loss function, using the mean absolute error (MAE) loss and the efficient Adam variant of stochastic gradient descent; fit the model for 50 training epochs with a batch size of 72, then train the LSTM model and continually tune the hyperparameters to obtain the optimal model.
Preferably, step 4 specifically comprises the following steps:
Step 4.1: use the above model to predict the target values in two settings, with the target feature only and with all candidate features included;
Step 4.2: compute the difference between the R² values of the two predictions to obtain the Granger causality score between the candidate features and the target feature.
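Steps 4.1 and 4.2 can be sketched as the following score computation. The two prediction arrays stand in for the model's test-set outputs in the two settings, and the sign convention (score = R² with candidate features minus R² with the target alone, so that a larger score means stronger causality) is an assumption consistent with the specification:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """R^2 = 1 - sum((y - yhat)^2) / sum((y - ybar)^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def granger_score(y_true, pred_target_only, pred_with_candidates):
    """Step 4.2: difference of the two test-set R^2 values; a larger value
    indicates stronger Granger causality of the candidates on the target."""
    return r2_score(y_true, pred_with_candidates) - r2_score(y_true, pred_target_only)
```

For example, if predictions made with the candidate features track the truth more closely than predictions from the target feature alone, the score is positive.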
Compared with the prior art, the invention has the following clear advantages:
The invention mines the Granger causality between time series based on deep learning. The data are first preprocessed with techniques such as missing-data imputation, data normalization, and one-hot encoding. A univariate time-series prediction of the target feature is then performed with an LSTM built on the Keras deep learning framework; the model structure and a series of hyperparameters are tuned to obtain the optimal model, and the model's R² score on the test set is recorded. The same model is then used to predict with all candidate features, its R² score on the test set is obtained, and the difference between the two scores serves as the Granger causality score between the candidate features and the target feature, yielding a quantified number that describes their Granger causality. Because a deep-learning prediction model fits time-series data more accurately, its prediction accuracy is higher than that of linear regression and conventional machine learning methods, so the invention can mine the Granger causality between time series at a deeper level and provides a quantified value to measure its strength. In addition, the method is applicable to influencing-factor analysis of other time series. In summary, the proposed deep-learning-based Granger causality mining method can handle massive data, mine at greater depth, and apply to a wide range of fields.
Description of the drawings:
Fig. 1 is the general flow chart of the Granger causality test;
Fig. 2 is the loss curve of the model training process of the invention;
Fig. 3 is the flow chart of the method of the invention.
Specific embodiment
The invention is further described in detail below with reference to specific embodiments and the accompanying drawings.
The hardware used in the invention comprises one PC and two 1080 graphics cards.
As shown in Fig. 3, the invention provides a deep-learning-based Granger causality mining method, which specifically comprises the following steps:
Step 1: obtain time-related sequence data and clean the data.
Step 2: preprocess the data to handle missing and non-stationary data.
Step 2.1: impute missing values in the data using the mode or median.
Step 2.2: apply one-hot encoding or LabelEncoder standardization to categorical features.
Step 2.3: convert non-stationary time series into stationary ones by first-order differencing.
Step 2.4: convert the time-series problem into a supervised learning problem.
Step 3: build the model, construct training and test sets, and tune parameters to obtain the optimal prediction model.
Step 3.1: split the training and test sets according to the data volume.
Step 3.2: define the LSTM model, with 50 neurons in the first hidden layer and one neuron in the output layer for predicting the target feature; the input shape is one time step with all candidate features. Use the mean absolute error (MAE) loss function and the efficient Adam variant of stochastic gradient descent, fitting the model for 50 training epochs with a batch size of 72.
Step 3.3: define the loss function, train the LSTM model, and continually tune the hyperparameters to obtain the optimal model.
Step 4: use the trained model to predict with the target-feature data alone and with the data containing the candidate features, and use the two training results to obtain the Granger causality score between the candidate features and the target feature.
Step 4.1: use the above model to predict the target values in two settings, with the target feature only and with all candidate features included.
Step 4.2: compute the difference between the R² values of the two predictions to obtain the Granger causality score between the candidate features and the target feature.
The above embodiments are only exemplary embodiments of the invention and are not intended to limit it; the protection scope of the invention is defined by the claims. Those skilled in the art may make various modifications or equivalent replacements within the spirit and scope of the invention, and such modifications or equivalent replacements shall also be regarded as falling within the protection scope of the invention.

Claims (5)

1. A deep-learning-based causality mining method, characterized by comprising the following steps:
Step 1: obtain time-related sequence data and clean the data;
Step 2: preprocess the cleaned data to handle missing and non-stationary data;
Step 3: build the model on the preprocessed data, construct training and test sets, and tune parameters to obtain the optimal prediction model;
Step 4: use the trained model to predict with the target-feature data alone and with the data containing the candidate features, and use the two training results to obtain the Granger causality score between the candidate features and the target feature.
2. The deep-learning-based causality mining method according to claim 1, characterized in that step 2 specifically comprises the following steps:
Step 2.1: impute missing values in the data using the mode or median;
Step 2.2: apply one-hot encoding or LabelEncoder standardization to categorical features;
Step 2.3: convert non-stationary time series into stationary ones by first-order differencing;
Step 2.4: convert the time-series problem into a supervised learning problem.
3. The deep-learning-based causality mining method according to claim 1, characterized in that step 3 specifically comprises the following steps:
Step 3.1: split the training and test sets according to the data volume;
Step 3.2: define the LSTM model, with 50 neurons in the first hidden layer and one neuron in the output layer for predicting the target feature; the input shape is one time step with all candidate features;
Step 3.3: define the loss function, using the mean absolute error (MAE) loss and the efficient Adam variant of stochastic gradient descent; fit the model for 50 training epochs with a batch size of 72, then train the LSTM model and continually tune the hyperparameters to obtain the optimal model.
4. The deep-learning-based causality mining method according to claim 1, characterized in that step 4 specifically comprises the following steps:
Step 4.1: use the above model to predict the target values in two settings, with the target feature only and with all candidate features included;
Step 4.2: compute the difference between the R² values of the two predictions to obtain the Granger causality score between the candidate features and the target feature.
5. The deep-learning-based causality mining method according to claim 1, characterized in that:
the data must first be preprocessed, including converting the time-series problem into a supervised learning problem, making the series stationary, and normalizing the data; for a time-series problem, the conversion can be realized by using the observation at the previous time step as the input feature X and the observation at the current time step as the output Y; because what is converted is a sequence of time-ordered data, it cannot be combined into a truly supervised problem with a clear one-to-one input/output relation at the boundaries, since at the very beginning or end of the data set one position always lacks a counterpart during training; to solve this, the input feature at the very beginning is set to 0, with its corresponding output being the first element of the time series; non-stationary time series are converted into stationary ones by first-order differencing; furthermore, standardizing the data before input very effectively improves the convergence speed and effect, especially when the activation function is sigmoid or tanh, whose region of largest gradient lies near 0: when the input is very large or very small, sigmoid and tanh are essentially flat, so during gradient descent the gradient tends to 0 and optimization becomes very slow; the default activation function of LSTM is tanh, whose output range is [-1, 1], which is also the preferred range for time-series data, so the MinMaxScaler class is used to transform the data set into the range [-1, 1]; after preprocessing, the LSTM model is built: the prepared data set is first split into a training set and a test set, both are decomposed into input and output variables, and the inputs are reshaped into the 3D format expected by the LSTM, i.e., [samples, time steps, features]; the LSTM model is then defined and fitted, with 50 neurons in the first hidden layer and one neuron in the output layer for predicting the target feature; the input shape is one time step with all candidate features; the mean absolute error (MAE) loss function and the efficient Adam variant of stochastic gradient descent are used, and the model is fitted for 50 training epochs with a batch size of 72; finally, by setting the validation_data parameter in the fit() function, the training and test loss are tracked during training, and at the end of the run the training and prediction losses are plotted, as shown in Fig. 2; on the test set, to better characterize the strength of Granger causality, R² is used to measure the quality of prediction, with the formula:
R² = 1 - Σ_i (y_i - ŷ_i)² / Σ_i (y_i - ȳ)²
where ŷ_i is the predicted time series, y_i is the observed true value, and ȳ is the mean of the time series; R² can therefore be interpreted as the explained variance of the prediction model: the closer the value is to 1, the better the model's prediction, while a negative value indicates that the model's predictive ability is worse than predicting the mean; R² allows Granger causality to be defined more formally: by predicting twice, once with the target feature only and once with all candidate features, and computing the difference between the two R² values on the test set, one obtains the Granger causality score, which quantifies the strength of Granger causality between the candidate features and the target feature.
CN201910242406.3A 2019-03-28 2019-03-28 A kind of causality method for digging based on deep learning Pending CN109993281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910242406.3A CN109993281A (en) 2019-03-28 2019-03-28 A kind of causality method for digging based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910242406.3A CN109993281A (en) 2019-03-28 2019-03-28 A kind of causality method for digging based on deep learning

Publications (1)

Publication Number Publication Date
CN109993281A true CN109993281A (en) 2019-07-09

Family

ID=67131761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910242406.3A Pending CN109993281A (en) 2019-03-28 2019-03-28 A kind of causality method for digging based on deep learning

Country Status (1)

Country Link
CN (1) CN109993281A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110731787A (en) * 2019-09-26 2020-01-31 首都师范大学 fatigue state causal network method based on multi-source data information
CN111026852A (en) * 2019-11-28 2020-04-17 广东工业大学 Financial event-oriented hybrid causal relationship discovery method
CN112101480A (en) * 2020-09-27 2020-12-18 西安交通大学 Multivariate clustering and fused time sequence combined prediction method
CN113300897A (en) * 2021-06-16 2021-08-24 中移(杭州)信息技术有限公司 Causal relationship identification method, terminal device and storage medium
CN113627663A (en) * 2021-08-04 2021-11-09 浙江大学 Dynamic cause and effect analysis method based on geographic time sequence in city
CN113866391A (en) * 2021-09-29 2021-12-31 天津师范大学 Deep learning model prediction factor interpretation method and application thereof in soil water content prediction
CN117688472A (en) * 2023-12-13 2024-03-12 华东师范大学 Unsupervised domain adaptive multivariate time sequence classification method based on causal structure

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110731787B (en) * 2019-09-26 2022-07-22 首都师范大学 Fatigue state causal network method based on multi-source data information
CN110731787A (en) * 2019-09-26 2020-01-31 首都师范大学 fatigue state causal network method based on multi-source data information
CN111026852B (en) * 2019-11-28 2023-06-30 广东工业大学 Financial event-oriented hybrid causal relationship discovery method
CN111026852A (en) * 2019-11-28 2020-04-17 广东工业大学 Financial event-oriented hybrid causal relationship discovery method
CN112101480A (en) * 2020-09-27 2020-12-18 西安交通大学 Multivariate clustering and fused time sequence combined prediction method
CN112101480B (en) * 2020-09-27 2022-08-05 西安交通大学 Multivariate clustering and fused time sequence combined prediction method
CN113300897A (en) * 2021-06-16 2021-08-24 中移(杭州)信息技术有限公司 Causal relationship identification method, terminal device and storage medium
CN113300897B (en) * 2021-06-16 2022-10-18 中移(杭州)信息技术有限公司 Causal relationship identification method, terminal device and storage medium
CN113627663A (en) * 2021-08-04 2021-11-09 浙江大学 Dynamic cause and effect analysis method based on geographic time sequence in city
CN113627663B (en) * 2021-08-04 2023-11-10 浙江大学 Dynamic causal analysis method based on geographic time sequence in city
CN113866391A (en) * 2021-09-29 2021-12-31 天津师范大学 Deep learning model prediction factor interpretation method and application thereof in soil water content prediction
CN113866391B (en) * 2021-09-29 2024-03-08 天津师范大学 Deep learning model prediction factor interpretation method and application thereof in soil water content prediction
CN117688472A (en) * 2023-12-13 2024-03-12 华东师范大学 Unsupervised domain adaptive multivariate time sequence classification method based on causal structure
CN117688472B (en) * 2023-12-13 2024-05-24 华东师范大学 Unsupervised domain adaptive multivariate time sequence classification method based on causal structure

Similar Documents

Publication Publication Date Title
CN109993281A (en) A kind of causality method for digging based on deep learning
CN110148285B (en) Intelligent oil well parameter early warning system based on big data technology and early warning method thereof
CN111639430B (en) Natural gas pipeline leakage identification system driven by digital twinning
CN101950382B (en) Method for optimal maintenance decision-making of hydraulic equipment with risk control
CN111985610B (en) Oil pumping well pump efficiency prediction system and method based on time sequence data
CN104636751A (en) Crowd abnormity detection and positioning system and method based on time recurrent neural network
CN106199174A (en) Extruder energy consumption predicting abnormality method based on transfer learning
CN114282443B (en) Residual service life prediction method based on MLP-LSTM supervised joint model
CN113554466A (en) Short-term power consumption prediction model construction method, prediction method and device
CN111160659B (en) Power load prediction method considering temperature fuzzification
CN112733440A (en) Intelligent fault diagnosis method, system, storage medium and equipment for offshore oil-gas-water well
CN116610816A (en) Personnel portrait knowledge graph analysis method and system based on graph convolution neural network
Wu MOOC learning behavior analysis and teaching intelligent decision support method based on improved decision tree C4. 5 algorithm
CN116911571A (en) Mine operation and maintenance monitoring system
CN115017513A (en) Intelligent contract vulnerability detection method based on artificial intelligence
CN115481726A (en) Industrial robot complete machine health assessment method and system
Gökçe et al. Performance comparison of simple regression, random forest and XGBoost algorithms for forecasting electricity demand
CN116720098A (en) Abnormal behavior sensitive student behavior time sequence modeling and academic early warning method
CN117077631A (en) Knowledge graph-based engineering emergency plan generation method
CN109635008A (en) A kind of equipment fault detection method based on machine learning
CN109034453A (en) A kind of Short-Term Load Forecasting Method based on multiple labeling neural network
CN118152579A (en) Knowledge-driven multi-parameter electric pump well working condition intelligent diagnosis method
CN115293249A (en) Power system typical scene probability prediction method based on dynamic time sequence prediction
Ban et al. Physics‐Informed Gas Lifting Oil Well Modelling using Neural Ordinary Differential Equations
CN118114812B (en) Shale gas yield prediction method, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190709

WD01 Invention patent application deemed withdrawn after publication