Fault Detection and Isolation Methods in Subsea Observation Networks
Figure 1. Overall system architecture of the observation network.
Figure 2. Diagram of fault detection.
Figure 3. Flowchart of fault detection.
Figure 4. Fault detection process based on deep learning.
Figure 5. Schematic of an observation network with dual shore station mesh topology.
Figure 6. PSpice simulation model.
Figure 7. Lumped parameter cascade model of a transmission cable.
Figure 8. Curve of the total cost function Loss.
Figure 9. Prediction chart for fault type classification.
Figure 10. Regression prediction for fault location.
Figure 11. Communication protocol definition.
Figure 12. Time response of a branch unit (BU) at different positions of a 600 km cable.
Figure 13. Control method of the BU.
Figure 14. Schematic of the fault detection experimental platform.
Figure 15. Fault detection experimental platform.
Figure 16. Experiment of fault isolation.
Figure 17. Turning-on command.
Figure 18. Turning-off command.
Figure 19. Error parity command.
Figure 20. Error address command.
Figure 21. Temperature measurement result of the BU.
Abstract
1. Introduction
2. Overview of the Subsea Observation Network
3. Fault Detection Methods
3.1. System Structure
- (1) A system simulation model is built in the PSpice simulation software (PSpice is published by Cadence, San Jose, CA, USA), and high-impedance faults are simulated at intervals by connecting a series of fault resistors at the fault point. Python is used to preprocess the simulation data (e.g., normalization of the feature combinations) to obtain the training set and test set for the DNN.
- (2) The TensorFlow deep learning framework is used to build a neural network fault prediction model, which is trained through multiple iterations on the training set features. Meanwhile, the test set is used for hyperparameter search and tuning to improve the model's accuracy and generalizability. The trained fault detection model is saved locally for use on real data.
- (3) The voltage and current data of the shore station and junction box, collected by the single-chip microcomputer, are displayed in real time by the Qt host computer software (Qt is a cross-platform C++ graphical user interface (GUI) application development framework, first released in 1991 and now developed by The Qt Company of Espoo, Finland; it can be used to develop both GUI and non-GUI programs, such as console tools and servers), and the collected data are saved locally in text/CSV format.
- (4) The host computer software calls the Python interpreter (Python, created by Guido van Rossum of Amsterdam, the Netherlands, has become one of the most popular programming languages) to run the deep learning script, which applies the same preprocessing steps used for the training data to the collected data. The combined features are fed as input to the trained deep learning model for prediction, and the Qt host computer displays the prediction results on the graphical user interface in real time. A minimal sketch of this preprocessing-and-prediction flow is given after this list.
- (5) If the system is predicted to have a potential high-impedance or open-circuit fault, the result can also be used as a drive signal for the fault isolation system to perform switching.
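For concreteness, the following is a minimal sketch of steps (1) and (4) in the TensorFlow 1.x style of the appendix scripts. The file names (sim_data.csv, collected.csv), the model path (model/fault_dnn), the tensor names, and the min-max normalization scheme are all illustrative assumptions, not the paper's exact implementation.

    import numpy as np
    import tensorflow as tf

    def minmax_fit(x):
        # Fit per-feature min-max normalization constants on the training data
        # (assumed scheme; the paper's exact feature scaling is not reproduced here).
        lo, hi = x.min(axis=0), x.max(axis=0)
        return lo, np.maximum(hi - lo, 1e-12)

    # Step (1): preprocess a PSpice export (hypothetical file) into DNN data sets.
    sim = np.loadtxt('sim_data.csv', delimiter=',')
    features, labels = sim[:, :22], sim[:, 22:]      # 22 combined input features
    lo, span = minmax_fit(features)
    features = (features - lo) / span
    split = int(0.8 * len(features))                 # DataSplitRate = 0.8
    trainset, trainlabels = features[:split], labels[:split]
    testset, testlabels = features[split:], labels[split:]

    # Step (4): restore the locally saved model and predict on data collected by
    # the Qt host computer, reusing the training normalization constants.
    collected = (np.loadtxt('collected.csv', delimiter=',') - lo) / span
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph('model/fault_dnn.meta')  # hypothetical path
        saver.restore(sess, 'model/fault_dnn')
        g = tf.get_default_graph()
        # Tensor names are assumed; they must match those given when the graph was saved.
        feed = {g.get_tensor_by_name('x_input:0'): collected,
                g.get_tensor_by_name('keep_prob:0'): 1.0,       # disable dropout at inference
                g.get_tensor_by_name('train_phase:0'): False}   # batch norm in test mode
        fault_type, location = sess.run(
            [g.get_tensor_by_name('predict1:0'),    # softmax over fault classes
             g.get_tensor_by_name('output2:0')],    # regressed fault location
            feed)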
3.2. Supervised Learning Feature Engineering
3.3. Training and Optimization of Fault Detection Model
4. Fault Isolation Methods
4.1. Communication Protocol
4.2. Optimal Communication Frequency
4.3. Control of BU
5. Experimental Procedures and Results
5.1. Experiment of Fault Detection
5.2. Experiment of Fault Isolation
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
Algorithm A1. Script of function add_layer.

def add_layer(inputs, in_size, out_size, n_layer,
              activation_function=None, BN_able=True, train_phase=True):
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.summary.histogram('Histogram', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.summary.histogram('Histogram', biases)
        with tf.name_scope('Wx_plus_BN_before'):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)
        if BN_able:
            with tf.variable_scope(layer_name):
                # Get the mean value and variance value of each feature (dimension)
                fc_mean, fc_var = tf.nn.moments(
                    Wx_plus_b,
                    axes=[0],  # the dimension to normalize; [0] is the batch axis
                )
                with tf.name_scope('scale'):
                    scale = tf.Variable(tf.ones([out_size]), name='scale')
                with tf.name_scope('shift'):
                    shift = tf.Variable(tf.zeros([out_size]), name='shift')
                epsilon = 0.001
                ema = tf.train.ExponentialMovingAverage(decay=0.5)

                def mean_var_with_update():
                    ema_apply_op = ema.apply([fc_mean, fc_var])
                    with tf.control_dependencies([ema_apply_op]):
                        return tf.identity(fc_mean), tf.identity(fc_var)

                mean, var = tf.cond(tf.equal(train_phase, True),
                                    mean_var_with_update,
                                    lambda: (ema.average(fc_mean), ema.average(fc_var)))
            with tf.name_scope('Wx_plus_BN_after'):
                Wx_plus_b = tf.nn.batch_normalization(Wx_plus_b, mean, var,
                                                      scale, shift, epsilon)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        tf.summary.histogram('outputs/Histogram', outputs)
        return outputs
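Algorithm A1 defines only the layer constructor. As one possible reading, the sketch below wires it into the Layer_dimension = [22, 20, 40, 28] network with the two output heads consumed by Algorithm A2; the ReLU activations, the placeholder and tensor names, and the seven-way class head (matching the N, K1–K3, G1–G3 columns of the test set table) are our assumptions rather than the authors' exact code.

    # Assumed wiring of add_layer into the [22, 20, 40, 28] network (illustrative only).
    xs = tf.placeholder(tf.float32, [None, 22], name='x_input')  # 22 combined features
    ys = tf.placeholder(tf.float32, [None, 8])                   # 7 class labels + location
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')     # dropout keep probability
    train_phase = tf.placeholder(tf.bool, name='train_phase')    # batch norm mode switch

    h1 = add_layer(xs, 22, 20, n_layer=1, activation_function=tf.nn.relu, train_phase=train_phase)
    h2 = add_layer(h1, 20, 40, n_layer=2, activation_function=tf.nn.relu, train_phase=train_phase)
    h3 = add_layer(h2, 40, 28, n_layer=3, activation_function=tf.nn.relu, train_phase=train_phase)

    # Two heads, as consumed by the cost function in Algorithm A2:
    ys1, ys2 = ys[:, :7], ys[:, 7:]   # split labels into class one-hots and location
    predict1 = tf.identity(add_layer(h3, 28, 7, n_layer=4,
                                     activation_function=tf.nn.softmax,
                                     BN_able=False, train_phase=train_phase),
                           name='predict1')   # fault-type probabilities
    output2 = tf.identity(add_layer(h3, 28, 1, n_layer=5,
                                    activation_function=None,
                                    BN_able=False, train_phase=train_phase),
                          name='output2')     # fault-location regression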
Algorithm A2. Script of cost functions.

cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(relative * ys1 * tf.log(tf.clip_by_value(predict1, 1e-10, 1.0)),
                   reduction_indices=1))
mean_squared = tf.losses.mean_squared_error(output2, ys2)
with tf.name_scope('loss'):
    loss = cross_entropy + mean_squared
    tf.summary.scalar('loss', loss)
with tf.name_scope('train'):
    train_op = tf.train.AdamOptimizer(learning_rate=Learning_rate).minimize(loss)
with tf.Session() as sess:
    merged = tf.summary.merge_all()  # summary writer goes in here
    train_writer = tf.summary.FileWriter("logs/train", sess.graph)
    test_writer = tf.summary.FileWriter("logs/test", sess.graph)
    init = tf.global_variables_initializer()
    sess.run(init)
    for i in range(TrainingSteps):
        sess.run(train_op, feed_dict={xs: trainset, ys: trainlabels,
                                      keep_prob: Keep_prob, train_phase: True})
        if i % RecordSteps == 0:
            print("After %d training step(s):" % i)
            print("TrainingSet Cost Value:",
                  sess.run(loss, feed_dict={xs: trainset, ys: trainlabels,
                                            keep_prob: 1, train_phase: True}))
            print("TrainingSet Accuracy:", compute_accuracy(trainset, trainlabels, True))
            print("TestSet Cost Value:",
                  sess.run(loss, feed_dict={xs: testset, ys: testlabels,
                                            keep_prob: 1, train_phase: False}))
            print("TestSet Accuracy:", compute_accuracy(testset, testlabels, False))
            print()
            train_result = sess.run(merged, feed_dict={xs: trainset, ys: trainlabels,
                                                       keep_prob: 1, train_phase: True})
            test_result = sess.run(merged, feed_dict={xs: testset, ys: testlabels,
                                                      keep_prob: 1, train_phase: False})
            train_writer.add_summary(train_result, i)
            test_writer.add_summary(test_result, i)
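Transcribed into equation form (our reading of the script above, not a formula quoted from the paper): with $m$ training samples, one-hot fault-type labels $y_1$ and softmax outputs $\hat{y}_1$ clipped to $[10^{-10}, 1]$ for numerical stability, location label $y_2$, and regressed location $\hat{y}_2$, the total cost is

$$ Loss = -\frac{relative}{m}\sum_{i=1}^{m}\sum_{j} y_{1,j}^{(i)} \log \hat{y}_{1,j}^{(i)} \;+\; \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}_{2}^{(i)} - y_{2}^{(i)}\right)^{2}, $$

i.e., the relative-weighted softmax cross-entropy plus the mean squared error of the fault location, minimized jointly with the Adam optimizer at Learning_rate = 0.001.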
References
Training sample eigenvalues (input features).

| Terminal voltage and current values | Shore station current values | Shore station current product values |
|---|---|---|
| 10 | 2 | 2 |
Training sample labels (network outputs).

| Normal/fault | BU open-circuit fault/normal | High-impedance fault/normal | Fault location |
|---|---|---|---|
| 1/0 | 1/0 | 1/0 | regression value |
| Hyperparameter | Function of Parameter |
|---|---|
| Relative = 100 | Weight balancing the softmax classification loss against the regression loss |
| DataSplitRate = 0.8 | Partition ratio between training set and test set |
| TrainingSteps = 20,000 | Number of training iterations |
| Learning_rate = 0.001 | Learning rate |
| Keep_prob = 0.8 | Dropout regularization coefficient (keep probability) |
| Layer_dimension = [22, 20, 40, 28] | Number of hidden layers and number of neurons in each layer |
Test set results: for each case, the first row gives the calibration (ground-truth) values and labels, and the second row gives the model's predicted class probabilities and fault location.

| No. | IS | I#1 | V#1 | I#2 | V#2 | I#3 | V#3 | N | K1 | K2 | K3 | G1 | G2 | G3 | D |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 (calibration) | 3.59 | 2.05 | 102 | 0.75 | 37.4 | 0.34 | 17.1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 20 |
| 1 (prediction) |  |  |  |  |  |  |  | 0.02 | 0.01 | 0.01 | 0.01 | 0.88 | 0.04 | 0.03 | 16.8 |
| 2 (calibration) | 3.24 | 2.10 | 105 | 0.75 | 37.8 | 0.33 | 16.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 40 |
| 2 (prediction) |  |  |  |  |  |  |  | 0.01 | 0.02 | 0.01 | 0.02 | 0.01 | 0.03 | 0.90 | 38.9 |
| 3 (calibration) | 2.73 | 2.73 | 136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 3 (prediction) |  |  |  |  |  |  |  | 0.01 | 0.01 | 0.98 | 0.01 | 0.00 | 0.00 | 0.00 | 1.4 |
| 4 (calibration) | 3.28 | 2.16 | 105 | 0.82 | 38.6 | 0.82 | 17.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 (prediction) |  |  |  |  |  |  |  | 0.91 | 0.01 | 0.01 | 0.01 | 0.02 | 0.02 | 0.02 | 0.7 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).