Article

Semi-Supervised Domain Adaptation for Multi-Label Classification on Nonintrusive Load Monitoring

1 Department of Computer Engineering, Inha University, Inha-ro 100, Nam-gu, Incheon 22212, Korea
2 Electronics and Telecommunications Research Institute (ETRI), 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5838; https://doi.org/10.3390/s22155838
Submission received: 31 May 2022 / Revised: 15 July 2022 / Accepted: 3 August 2022 / Published: 4 August 2022
(This article belongs to the Section Electronic Sensors)

Abstract

Nonintrusive load monitoring (NILM) is a technology that analyzes the load consumption and usage of individual appliances from the total load. NILM is becoming increasingly important because residential and commercial power consumption accounts for about 60% of global energy consumption. Deep neural network-based NILM studies have increased rapidly as hardware computation costs have decreased. A significant amount of labeled data is required to train deep neural networks, but installing smart meters on every appliance of every household would incur costs that grow geometrically. It is therefore necessary to detect whether an appliance is in use from the total load alone, without installing separate smart meters. In other words, domain adaptation research, which can handle the huge complexity of the data and generalize information from various environments, has become a major challenge for NILM. In this research, we optimize domain adaptation by employing techniques such as robust knowledge distillation based on a teacher–student structure, reduced complexity of the feature distribution based on gkMMD, TCN-based feature extraction, and pseudo-labeling-based domain stabilization. In the experiments, we downsample the UK-DALE and REDD datasets as in a real environment, and then verify the proposed model in various cases and discuss the results.

1. Introduction

Understanding energy usage in buildings has been considered an important issue because residential and commercial power consumption accounts for about 60% of global energy consumption [1]. Optimized energy usage management has advantages for both suppliers and consumers of energy. From the supplier’s point of view, planned consumption may be encouraged according to the frequency and pattern of use of home appliances. In addition, comprehensive information about device-specific operation also makes it easier for consumers to develop plans that reduce costs [2]. The most direct way to obtain an electricity usage profile is to install a submeter for each appliance and record instantaneous power readings, but applying this method to all devices is difficult in practice due to cost and maintenance burden. Therefore, nonintrusive load monitoring (NILM) aims to disaggregate total energy consumption by device. The NILM approach, which does not depend on submeters, has shown significant efficiency in commercial and residential energy utilization and remains an important task [3].
NILM is inherently difficult because it analyzes information about the simultaneous switching or noise generation of multiple devices without attaching multiple submeters [4,5,6]. To address this, many techniques such as dynamic time warping (DTW), matrix factorization, neuro-fuzzy modeling, and graph signal processing (GSP) have been proposed, and both supervised and unsupervised learning-based techniques have been studied [7,8,9]. Hart [10] first introduced unsupervised learning methods to decompose electrical loads through clustering. However, because clustering-based methods have no training data and struggle to predict accurate power loads, various alternatives such as hidden Markov models (HMMs) were proposed. In recent years, the number of research studies on deep neural networks (DNNs) has increased rapidly with the advancement of high-end hardware devices, and the availability of data for supervised learning has increased [11]. Long short-term memory (LSTM), a representative supervised learning technique, treats NILM as a prediction problem on time-series data. Refs. [12,13] proposed methods for training models by controlling the data with various sampling-based windows. Nolasco et al. [14] included a multi-label procedure to increase the recognition rate for multiple loads by marking which loads are on at any given time, and developed architectures based on convolutional neural networks (CNNs), achieving outstanding performance in signal detection and feature extraction. However, existing supervised learning methods for NILM still have two problems. First, there is a fundamental assumption that the power usage data of real devices follows a distribution similar to that of the training data. The same performance cannot be guaranteed in actual situations because devices of the same type have different energy consumption depending on product and brand, noise shape and intensity, physical environment, etc. [15]. To overcome this problem, training data containing all domain information would have to be acquired, which is practically impossible since collecting the energy consumption of each device from different houses requires huge costs. Another problem is that, even assuming neural network models could be trained on data from all environments, extracting the critical information is very difficult because of the vast amount of complex data [16,17,18]. Therefore, identifying suitable techniques that can handle the large complexity of data and generalize information across various domains is the main challenge in NILM.
To solve these problems, we consider domain adaptation [19,20]. Domain adaptation is a form of transfer learning in which a model trained on one domain is adapted to another domain dataset for the same task. This concept can easily be applied to the NILM system. Many researchers have proposed domain adaptation systems to generalize information across domains [21,22]. Liu et al. [21] conducted a regression study to refine energy consumption estimates by applying the most typical domain adaptation method to NILM. Since only the basic concept of domain adaptation has been applied to NILM, it has the potential to develop in various ways. Ref. [22] proposed a method that incorporates the mean teacher method into domain adaptation, where regression on the source and target domains is performed by a single model. However, this method did not show good performance in domain generalization due to its shallow model structure. To the best of our knowledge, there are no papers on classification tasks in domain adaptation studies for NILM. In this paper, we perform classification for device usage detection in NILM by incorporating powerful feature information distillation based on the teacher–student structure and pseudo-labeling (PL) into domain adaptation.
The main contents of this paper are as follows:
  • We conduct the first classification study in the domain adaptation field for NILM;
  • We show performance improvements by incorporating robust feature information distillation techniques based on the teacher–student structure into domain adaptation;
  • The decision boundaries are refined through PL-based domain stabilization.
The remainder of this paper is organized as follows. Section 2 shows a brief review of related studies of NILM and domain adaptation. Section 3 introduces the proposed method. Section 4 presents the experimental setup, case study, and discussions. Finally, Section 5 concludes the paper.

2. Related Work

2.1. Nonintrusive Load Monitoring

Consider a building with $m$ appliances, each having $k$ operating power modes, over time $1, \dots, T$. Let $x_i = (x_i(1), \dots, x_i(T))$ denote the energy consumption of the $i$-th device. The energy usage of the $i$-th device at sample time $n$ can be formulated as follows:

$$x_i(n) = \left[ U_1^i(n), \dots, U_k^i(n) \right] \left[ \psi_1^i, \dots, \psi_k^i \right]^{\top} + \epsilon_i(n) \quad (1)$$

where $\psi_k^i$ is the electricity consumed in a particular operating mode, $\epsilon_i(n)$ denotes the background measurement noise, and $U_k^i(n) \in \{0, 1\}$ is the operating On/Off status of the $i$-th appliance at time $n$. Since every appliance operates in a single mode, the operating status satisfies the equality constraint $\sum_{j=1}^{k} U_j^i(n) = 1$. At time $n$, the final aggregate energy of the house is expressed as follows:

$$x(n) = \sum_{i=1}^{m} \left( \left[ U_1^i(n), \dots, U_k^i(n) \right] \left[ \psi_1^i, \dots, \psi_k^i \right]^{\top} + \epsilon_i(n) \right) \quad (2)$$
The goal of the NILM algorithm is to disaggregate the measured electricity usage x to generate appliance-specific energy consumption profiles [23,24]. Therefore, the final challenge is to reduce the difference between the actual measurements of the device and the disaggregated energy consumption [25].
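To make the formulation concrete, the following NumPy sketch simulates Equations (1) and (2): each appliance occupies exactly one operating mode per time step, and the house aggregate is the sum of the per-appliance consumptions plus noise. The appliance names, mode powers, and noise level are illustrative assumptions, not values from the datasets used later.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 60                                  # number of time steps
modes = {                               # psi_k^i: per-mode power draw, illustrative values
    "fridge":    [2.0, 120.0],          # [standby, compressor on]
    "kettle":    [0.0, 2200.0],         # [off, boiling]
    "microwave": [1.0, 1300.0],         # [standby, heating]
}

aggregate = np.zeros(T)
per_appliance = {}
for name, psi in modes.items():
    # U_k^i(n): one-hot operating status per time step (exactly one active mode)
    status = rng.integers(0, len(psi), size=T)
    u = np.eye(len(psi))[status]                        # shape (T, k)
    x_i = u @ np.asarray(psi) + rng.normal(0, 1.0, T)   # Equation (1) with Gaussian noise
    per_appliance[name] = x_i
    aggregate += x_i                                    # Equation (2): sum over appliances

print(aggregate[:5])   # the total load that an NILM model must disaggregate
```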
Elafoudi et al. [7] detected edges within a time window and used DTW to identify unique load signatures. Lin et al. [8] proposed a hybrid classification technique that combines fuzzy c-means and clustering piloting with neuro-fuzzy classification to distinguish devices that have similar load signatures. He et al. [9] treated NILM as a single-channel blind source separation problem to perform low-complexity classification of active power measurements. Based on this idea, they proposed a GSP-based NILM system to handle the large training overhead and computational cost of the conventional graph-based method.

2.2. Domain Adaptation

Domain adaptation is an area of transfer learning [26]. In general transfer learning, the task or the domain can change from source to target; in domain adaptation, however, only the domain is assumed to change [19,27]. The aim is to generalize a classification or regression model trained on the source domain so that it can be applied to target domains with different distributions, since distribution disagreement between training and real data yields poor model performance. Ganin et al. [19] proposed a multi-task learning model with a class classifier and a domain classifier. The model was trained to classify only class labels, not domain labels. For this, they introduced the gradient reversal layer (GRL) for the domain classifier. The GRL multiplies the gradient by a negative constant on the backward pass, which makes the feature extractor remove domain information. With the advancement of deep neural networks (DNNs), domain adaptation has achieved outstanding performance in various fields [11,14,28,29,30,31,32,33]. In [34], domain adversarial training of neural networks (DANN), inspired by the generative adversarial network (GAN), laid the foundation for applying adversarial learning methodologies to domain adaptation and accomplished excellent performance. In addition, domain adaptation algorithms based on the maximum mean discrepancy (MMD) between source and target have been studied extensively [35,36,37,38]. In [39], Long et al. proposed a joint MMD to align the joint distributions. Deep domain confusion (DDC) [34] proposed a technique for using pre-trained networks by adding MMD-based adaptation layers.
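For reference, the gradient reversal layer described above (identity on the forward pass, gradient multiplied by a negative constant on the backward pass) can be written in a few lines of PyTorch; this is a generic sketch of the idea rather than the exact layer used in any cited work.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: features = extractor(x); domain_logits = domain_classifier(grad_reverse(features, 1.0))
```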
Although domain adaptation is used in various fields as described above, its application to NILM has not been researched much and requires advancement. In [40], Wan proposed a domain adaptation algorithm for optical character recognition (OCR), which was extended to the NILM field and produced prominent results. Recently, Wang and Mao proposed applying a model-agnostic meta-learning (MAML)-based domain adaptation algorithm to NILM, inspired by the pre-trained models heavily studied in the NLP field, and outperformed state-of-the-art deep learning-based methods [41].

3. Semi-Supervised Domain Adaptation for Multi-Label Classification on Non-Intrusive Load Monitoring

Various deep learning models have been applied to the NILM field. However, the task of disaggregating the use of different devices across many houses is still a relatively new concept. To solve this problem, we propose a semi-supervised domain adaptation method for multi-label classification on nonintrusive load monitoring. The overall diagram is shown in Figure 1.
Several hypotheses are made in this work to apply semi-supervised domain adaptation to NILM. The first hypothesis is that the distributions of the source and target domains are different. Most NILM systems are based on this hypothesis. We also use labels only on the source domain for domain adaptation, not on the target domain. Second, even if the distributions of source domain data and target domain data differ, it is assumed that the same device has domain-independent common characteristics regardless of the domain. In motor-driven devices, a lagging current with slow current flow occurs, which results in a low power factor; capacitive devices generate a leading current with fast current flow, which results in a high power factor. The power factor is the ratio of active power to apparent power, regardless of the magnitude of power consumption. In other words, if two different houses use the same kind of electronic devices (e.g., refrigerator, TV, etc.) from different manufacturers, it is assumed that there is a common usage pattern even if the power consumption differs.
The proposed method consists of three main steps, shown in Figure 2. In the knowledge distillation stage, high-level knowledge is distilled into the student network (SN) by a temporal convolutional network (TCN) [42]-based teacher network (TN) [43] trained using labeled source data. Domain-dependent features vary depending on the domain, and domain-independent features are constant regardless of the domain. In the next step, we perform a robust domain adaptation that allows us to extract only domain-independent features to adapt source and target data to neural networks regardless of domain. Appliance usage detection classifies devices from source domain data. Additionally, domain classifiers are trained with GRL to prevent classification for source and target domains. As a result, feature extractors can extract robust domain-independent features that enable device usage classification regardless of domain. In the domain stabilization step in Figure 2, we stabilize the domain through PL-based fine-tuning. First, domain-independent features of target data are extracted from the feature extractor and then pseudo-labeled based on the source domain label in appliance usage detection. Since all the target data cannot be pseudo-labeled, it is partially pseudo-labeled. Therefore, the target data consists of pseudo-labeled data and unlabeled data. Secondary domain adaptation is performed based on the enhanced target domain data and domain-independent features extracted through robust distillation. The network performance is stabilized and improved through the advantages of low-density separation between classes and entropy regularization. Details of each part of the proposed framework are in the subsections.

3.1. Network Architecture

The goal of this section is to build a semi-supervised domain adaptation model that can estimate the target domain labels $Y_t$ using labeled source data $(X_s, Y_s)$ and target data $X_t$. As shown in Figure 2, the model includes three parts: knowledge distillation, robust domain adaptation, and domain stabilization. Details of the network structure are as follows:
(1)
Knowledge distillation: knowledge is distilled using a TCN feature extraction-based teacher–student network to obtain robust domain-independent features of the source data. The TCN is a CNN structure extended for time-series data modeling. It provides better performance than typical time-series deep learning models such as LSTM because it has a much longer and more effective memory without a gate mechanism. The TCN consists of several residual blocks, each containing a dilated causal convolution operation $O$. For an input $x$ and a filter $f_t : \{0, 1, \dots, k-1\} \rightarrow \mathbb{R}$, $O$ at point $s$ is defined by Equation (3):

$$O(s) = (x \ast_d f_t)(s) = \sum_{i=0}^{k-1} f_t(i) \cdot x_{s - d \cdot i} \quad (3)$$

where $d$ is the dilation factor, $\ast_d$ denotes the $d$-dilated convolution, $k$ is the filter size, and $s - d \cdot i$ indexes past values. However, as the network depth increases, performance degrades rapidly due to overfitting. ResNet’s key concept, namely residual mapping, can solve this problem. The TCN residual block includes two layers of dilated causal convolution with the ReLU activation function, weight normalization, and dropout. The 1 × 1 convolution layer in the TCN ensures that the input and output have the same size. The output of the transformation $\mathcal{T}$ of the time-series data in the TCN residual block is added to the identity mapping of the input $x$ and expressed as follows:
$$R(s) = \mathcal{T}(x, \theta) + x \quad (4)$$
where $\theta$ denotes the set of network parameters. It has already been demonstrated that this residual-block concept improves network performance by learning modifications to the identity mapping rather than the overall transformation. Based on this, a deep TCN network can be built by stacking multiple TCN residual blocks. Assuming that $x_I$ is the input of the $I$-th block, the forward propagation from the $I$-th block to the $(I+n)$-th block can be formulated as follows:
$$x_{I+n} = x_I + \sum_{i=I}^{I+n-1} \mathcal{T}(x_i, \theta_i) \quad (5)$$
where $x_I$ is the input of the $I$-th block and $\theta_i$ is the parameter set of the $i$-th block. Therefore, the feature extractor $FE_{te}(x_s, \theta_{f\_te})$ of the TN is defined as follows:
$$FE_{te}(x_s, \theta_{f\_te}) = x_s + \sum_{i=0}^{k-1} \mathcal{T}(x_{s\_i}, \theta_{f\_te\_i}) \quad (6)$$
where $k$ is the number of layers, $x_s$ is the source data, $\theta_{f\_te}$ is the parameter set of the TN, $x_{s\_i}$ is the $i$-th source data, and $\theta_{f\_te\_i}$ is the parameter set of the $i$-th block in the TN. Similarly, the feature extractor $FE_{st}(x_s, \theta_{f\_st})$ of the SN can be defined as follows:
$$FE_{st}(x_s, \theta_{f\_st}) = x_s + \sum_{i=0}^{l-1} \mathcal{T}(x_{s\_i}, \theta_{f\_st\_i}) \quad (7)$$
where $l$ is the number of layers, $\theta_{f\_st}$ is the parameter set of the SN, and $\theta_{f\_st\_i}$ is the parameter set of the $i$-th block in the SN. Based on the feature $f_{te}$ extracted by Equation (6), the TN must produce soft-label information for transferring knowledge to the SN through appliance usage detection, which consists of fully connected layers. The output $\hat{y}_{te}$ of the TN is defined as follows:
$$\hat{y}_{te} = \mathrm{Softmax}_{T}\left( AUD_{te}(f_{te}, \theta_{te}) \right)_i = \frac{e^{AUD_{te}(f_{te}, \theta_{te})_i / T}}{\sum_{j=1}^{K} e^{AUD_{te}(f_{te}, \theta_{te})_j / T}} \quad (8)$$
where $te$ refers to the TN, $\hat{y}_{te}$ is the predicted classification label of $x_s$ in the TN, $T$ is a temperature parameter, and $\mathrm{Softmax}_{T}$ is the softmax function with a temperature parameter. $\theta_{te}$ is the parameter set of $AUD_{te}$, $AUD_{te}(f_{te}, \theta_{te})_i$ is the $i$-th element of the output vector of $AUD_{te}$, and $K$ is the number of elements of the output vector. The temperature parameter prevents information loss in the softmax output and thus maximizes the benefit of the soft-label values for knowledge distillation. The estimated soft label $\hat{y}_{te}$ is compared with the soft prediction $\hat{y}_{st\_sp}$ of the SN and used as the distillation loss in network training. $\hat{y}_{st\_sp}$ is obtained as follows:
$$\hat{y}_{st\_sp} = \mathrm{Softmax}_{T}\left( AUD_{st}(f_{st\_s}, \theta_{st}) \right)_i = \frac{e^{AUD_{st}(f_{st\_s}, \theta_{st})_i / T}}{\sum_{j=1}^{K} e^{AUD_{st}(f_{st\_s}, \theta_{st})_j / T}} \quad (9)$$
where $st$ refers to the SN, $\hat{y}_{st\_sp}$ is the predicted classification label of $x_s$ in the SN and serves as the soft prediction of the SN, $\theta_{st}$ is the parameter set of $AUD_{st}$, and $AUD_{st}(f_{st\_s}, \theta_{st})_i$ is the $i$-th element of the output vector of $AUD_{st}$. The classification performance of the SN should be evaluated along with knowledge distillation. The performance is evaluated by comparing the hard prediction $\hat{y}_{st\_hp}$ of the SN with the ground truth $y_s$ of the source domain data. $\hat{y}_{st\_hp}$ is obtained as follows:
$$\hat{y}_{st\_hp} = \mathrm{Softmax}\left( AUD_{st}(f_{st\_s}, \theta_{st}) \right)_i = \frac{e^{AUD_{st}(f_{st\_s}, \theta_{st})_i}}{\sum_{j=1}^{K} e^{AUD_{st}(f_{st\_s}, \theta_{st})_j}} \quad (10)$$
where $\hat{y}_{st\_hp}$ is the predicted classification label of $x_s$ in the SN and is used as the hard prediction of the SN. In Equation (10), the temperature parameter is not used.
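As a concrete illustration of the TCN residual block used by the feature extractors above, the following is a minimal PyTorch sketch (dilated causal convolution, weight normalization, ReLU, dropout, and a 1 × 1 convolution when channel counts differ). The channel counts, kernel size, and dropout value are illustrative assumptions and do not reproduce the exact TN/SN configuration reported later in Table 2.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class TCNResidualBlock(nn.Module):
    """One residual block: two dilated causal conv layers plus an identity (or 1x1) shortcut."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1, dropout=0.1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left-pad only -> causal
        self.conv1 = weight_norm(nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation))
        self.conv2 = weight_norm(nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation))
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def _causal(self, conv, x):
        # Pad on the left so the output at time s depends only on inputs up to time s.
        return conv(nn.functional.pad(x, (self.pad, 0)))

    def forward(self, x):                                 # x: (batch, channels, time)
        out = self.drop(self.relu(self._causal(self.conv1, x)))
        out = self.drop(self.relu(self._causal(self.conv2, out)))
        return self.relu(out + self.downsample(x))        # R(s) = T(x, theta) + x

# A deep TCN feature extractor stacks several such blocks with dilation 2**i for block i.
```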
(2)
Robust domain adaptation: domain adaptation is performed with the robust features obtained by knowledge distillation in order to extract domain-independent features. Domain adaptation consists of the following three components: the feature extractor, the domain classifier, and appliance usage detection. First, the feature extractor $FE_{st}$ of the SN is used. The feature extractor $FE_{st}(x_s, \theta_{f\_st})$ of the source data and $FE_{st}(x_t, \theta_{f\_st})$ of the target data share a parameter set. A model trained only on source data has difficulty representing target data. To adapt the target domain data representation to $FE_{st}$, the model learns the feature distribution difference between the two domains using MMD and minimizes it. The MMD distance is obtained as follows:
$$MMD(X_s, X_t) = \left\| \frac{1}{n_s} \sum_{i=1}^{n_s} \varphi(x_s^i) - \frac{1}{n_t} \sum_{j=1}^{n_t} \varphi(x_t^j) \right\|_{\mathcal{H}} \quad (11)$$
where $\varphi$ is a feature mapping function that maps the original feature space into the reproducing kernel Hilbert space $\mathcal{H}$. Further description of the kernel is given in the following subsection. The domain classifier $DC(f, \theta_{dc})$ is trained with the ground-truth values of the source domain data and the target domain data set to $dc_s = 0$ and $dc_t = 1$, respectively, so that domain-independent features can be obtained from the feature extractor. $DC(f, \theta_{dc})$ has an output $\hat{dc}_s$ for source domain data and an output $\hat{dc}_t$ for target domain data. The two outputs are defined as follows:
$$\hat{dc}_s = \mathrm{Softmax}\left( DC(f_{st\_s}, \theta_{dc}) \right) \quad (12)$$
$$\hat{dc}_t = \mathrm{Softmax}\left( DC(f_{st\_t}, \theta_{dc}) \right) \quad (13)$$
where $f_{st\_s}$ is the source domain feature, $f_{st\_t}$ is the target domain feature, and $\theta_{dc}$ is the parameter set of $DC$. $\hat{dc}_s$ and $\hat{dc}_t$ take values between 0 and 1. $DC$ enables $FE_{st}$ to produce domain-independent features by being trained so that the source and target domains cannot be distinguished. Appliance usage detection uses the $AUD_{st}$ of the SN. $AUD_{st}$ verifies classification performance using the domain-independent features of the source data as input. The prediction of device usage detection can be obtained using Equation (10). During network inference, the prediction for the target domain is obtained using Equation (14):
$$\hat{y}_t = \mathrm{Softmax}\left( AUD_{st}(f_{st\_t}, \theta_{st}) \right) \quad (14)$$
where $\hat{y}_t$ is the prediction for the target data. Detection performance for target domain data is evaluated by comparing $\hat{y}_t$ with the ground truth $y_t$ of the target domain data.
(3)
Domain stabilization: the target domain data is pseudo-labeled with $AUD_{st}$ to enhance the data, thereby stabilizing the domain and improving the performance of the network. First, the feature $f_{st\_t}$ of the target domain data $x_t$ is input to $AUD_{st}$. Once $\mathrm{Softmax}(AUD_{st}(f_{st\_t}, \theta_{st}))$ is obtained through Equation (14), the pseudo-label is the prediction with the highest probability among the softmax values. However, if that probability is lower than a threshold, the sample is not pseudo-labeled. The threshold is determined experimentally. Domain stabilization consists of three steps: feature extraction, domain classification, and appliance usage detection. Appliance usage detection uses the following three types of data: source data ($X_s$, $Y_s$), pseudo-labeled target data ($X_t$, $Y_{tl}$), and unlabeled target data $X_t$. For feature extraction, $f_{st\_s}$, $f_{st\_tl}$, and $f_{st\_t}$ are output through $FE_{st}$. The $DC$ is unchanged: $f_{st\_s}$, $f_{st\_tl}$, and $f_{st\_t}$ are classified as inputs, as in Equations (12) and (13). Appliance usage detection computes $AUD_{st}(f_{st\_s}, f_{st\_tl}; \theta_{st})$.
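The pseudo-labeling rule described above can be sketched as follows: target samples whose maximum softmax probability exceeds a confidence threshold receive a pseudo-label, and the rest remain unlabeled. The threshold value and function names here are illustrative assumptions (the paper determines the threshold experimentally).

```python
import torch

@torch.no_grad()
def pseudo_label(feature_extractor, aud_head, x_t, threshold=0.9):
    """Return (confident_inputs, pseudo_labels, unlabeled_inputs) for a batch of target data."""
    probs = torch.softmax(aud_head(feature_extractor(x_t)), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold                 # only confident predictions become pseudo-labels
    return x_t[keep], labels[keep], x_t[~keep]
```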

3.2. Network Losses

We carefully design network losses to obtain domain-independent features from feature distributions. We divide the network loss into the following four stages: knowledge distillation loss, feature distribution difference loss, domain classification loss, and appliance usage detection loss.
(1)
Knowledge distillation loss: as shown in Figure 1, the knowledge distillation phase loss is the sum of the distillation loss $L_{ds}$ and the student loss $L_{st}$. $L_{ds}$ incorporates the difference between the classification results of the TN and the SN into the loss. $L_{ds}$ is defined as follows:
$$L_{ds} = 2 \alpha T^2 L_{ce}\left( \mathrm{Softmax}_{T}(AUD_{te}(f_{te}, \theta_{te})), \mathrm{Softmax}_{T}(AUD_{st}(f_{st\_s}, \theta_{st})) \right) = 2 \alpha T^2 L_{ce}\left( \hat{y}_{te}, \hat{y}_{st\_sp} \right) \quad (15)$$
where $L_{ce}$ is the cross-entropy loss and $\alpha$ is the learning rate. The cross-entropy loss is calculated between the teacher and student outputs. If the classification results of the teacher and the student match and distillation is successful, $L_{ds}$ takes a small value. $L_{st}$ is the cross-entropy loss of the classification of the SN. $L_{st}$ is defined as follows:
$$L_{st} = (1 - \alpha) L_{ce}\left( \mathrm{Softmax}(AUD_{st}(f_{st\_s}, \theta_{st})), y_s \right) = (1 - \alpha) L_{ce}\left( \hat{y}_{st\_hp}, y_s \right) \quad (16)$$
Even though the SN has relatively fewer parameters than the TN, $L_{st}$ also decreases as $L_{ds}$ decreases, so the SN shows good feature extraction and classification performance.
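A minimal PyTorch sketch of the two loss terms above, treating the detection head as a softmax classifier as in Equations (8)–(10). The α-weighting follows the common knowledge-distillation convention, and the default temperature and α values are illustrative assumptions rather than the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def distillation_losses(teacher_logits, student_logits, y_s, T=4.0, alpha=0.5):
    # L_ds: cross-entropy between temperature-softened teacher and student outputs (Eq. 15).
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    l_ds = 2 * alpha * (T ** 2) * -(soft_teacher * log_soft_student).sum(dim=1).mean()
    # L_st: ordinary cross-entropy between the student's hard prediction and the ground truth (Eq. 16).
    l_st = (1 - alpha) * F.cross_entropy(student_logits, y_s)
    return l_ds, l_st
```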
(2)
Feature distribution difference loss: as shown in Figure 1, the feature distribution difference loss is the MMD loss [44] $L_f$. $L_f$ estimates the difference between the feature distribution of the source domain data $X_s$ and the feature distribution of the target domain data $X_t$ through MMD. $L_f$ is generally defined as follows:
$$L_f(f_{st\_s}, f_{st\_t}) = MMD^2(f_{st\_s}, f_{st\_t}) = \left\| \mathbb{E}_{X_s \sim f_{st\_s}}[\varphi(X_s)] - \mathbb{E}_{X_t \sim f_{st\_t}}[\varphi(X_t)] \right\|_{\mathcal{H}}^2 = \left\langle \mathbb{E}_{X_s \sim f_{st\_s}}[\varphi(X_s)], \mathbb{E}_{X_s' \sim f_{st\_s}}[\varphi(X_s')] \right\rangle_{\mathcal{H}} + \left\langle \mathbb{E}_{X_t \sim f_{st\_t}}[\varphi(X_t)], \mathbb{E}_{X_t' \sim f_{st\_t}}[\varphi(X_t')] \right\rangle_{\mathcal{H}} - 2 \left\langle \mathbb{E}_{X_s \sim f_{st\_s}}[\varphi(X_s)], \mathbb{E}_{X_t \sim f_{st\_t}}[\varphi(X_t)] \right\rangle_{\mathcal{H}} \quad (17)$$
For the mapping function $\varphi$ in Equation (17), we use the kernel trick because obtaining all the moments explicitly would require too many computational resources. We utilize the Gaussian kernel shown in Equation (18):
$$gk(x, y) = \exp\left( -\frac{\| x - y \|^2}{2 \sigma^2} \right) \quad (18)$$
where $gk$ is the Gaussian kernel. In Equation (18), the Taylor expansion of the exponential function is given by Equation (19):
$$e^x = 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!} x^3 + \cdots \quad (19)$$
Since Equation (19) contains all the moments of $x$, we use the Gaussian kernel. $gk(x, y)$ can be expressed as Equation (20):
$$gk(x, y) = \left\langle \varphi(x), \varphi(y) \right\rangle_{\mathcal{H}} \quad (20)$$
When Equation (17) is rewritten using Equation (20), $L_f$ is re-formulated as shown in Equation (21):
$$L_f(f_{st\_s}, f_{st\_t}) = \mathbb{E}_{X_s, X_s' \sim f_{st\_s}}\left[ gk(X_s, X_s') \right] + \mathbb{E}_{X_t, X_t' \sim f_{st\_t}}\left[ gk(X_t, X_t') \right] - 2 \mathbb{E}_{X_s \sim f_{st\_s}, X_t \sim f_{st\_t}}\left[ gk(X_s, X_t) \right] \quad (21)$$
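The Gaussian-kernel MMD of Equation (21) can be estimated from two batches of features as in the following PyTorch sketch; the kernel bandwidth σ is an illustrative assumption (a single fixed value rather than, for example, a multi-bandwidth mixture).

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # gk(x, y) = exp(-||x - y||^2 / (2 * sigma^2)), computed for all pairs of rows.
    dist2 = torch.cdist(a, b) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

def gk_mmd_loss(f_s, f_t, sigma=1.0):
    """Biased estimate of MMD^2 between source features f_s and target features f_t (Eq. 21)."""
    k_ss = gaussian_kernel(f_s, f_s, sigma).mean()
    k_tt = gaussian_kernel(f_t, f_t, sigma).mean()
    k_st = gaussian_kernel(f_s, f_t, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```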
(3)
Domain classification loss: as shown in Figure 1, the domain classification loss $L_{dc}$ involves $FE_{st}$ and $DC$. $DC(f, \theta_{dc})$ is modeled so that the source domain and the target domain cannot be distinguished. To minimize the distribution difference between $f_{st\_s}$ and $f_{st\_t}$, the loss of $DC(f, \theta_{dc})$ should be maximized. Using the outputs $\hat{dc}_s$ and $\hat{dc}_t$ of $DC(f, \theta_{dc})$, the binary-classifier cross-entropy loss $L_{dc}$ can be obtained as in Equation (22):
$$L_{dc}(x_s, x_t; \theta_{f\_st}, \theta_{dc}) = \sum_{i=1}^{sn} \left[ \log\left( 1 - \hat{dc}_s^i \right) + \log\left( \hat{dc}_t^i \right) \right] \quad (22)$$
where $sn$ is the number of samples in a mini-batch.
(4)
Appliance usage detection loss: as shown in Figure 1, the appliance usage detection loss uses $L_{st}$ in the domain adaptation phase and $L_{aud}$ in the robust domain adaptation phase. Since both losses are applied to the same $AUD_{st}$, they are formulated in the same way, as in Equations (23) and (24):
$$L_{st} = L_{ce}\left( \mathrm{Softmax}(AUD_{st}(f_{st\_s}, \theta_{st})), y_s \right) \quad (23)$$
$$L_{aud} = L_{ce}\left( \mathrm{Softmax}(AUD_{st}(f_{st\_s}, \theta_{st})), y_s \right) + L_{ce}\left( \mathrm{Softmax}(AUD_{st}(f_{st\_tl}, \theta_{st})), y_{tl} \right) \quad (24)$$
Each sub-network is trained by differentiating the loss with respect to the corresponding weights, as shown by the dotted lines in Figure 1.

3.3. Training Strategy

According to the network loss discussed above, the final optimization objective can be expressed as follows:
$$\theta_{f\_st}, \theta_{st}, \theta_{dc} = \arg\min \left( L_{aud} + L_{dc} + L_f \right) \quad (25)$$
Assuming that $\theta_{f\_te}$ and $\theta_{te}$ belong to a pre-trained high-performance network, they are not further updated to reduce the network loss. When learning $L_{dc}$ of Equation (22), we apply the gradient reversal layer (GRL) so that the network is trained in a direction that fails to classify the domains. The pseudo-code of the proposed model is summarized in Algorithm 1.
Algorithm 1: Parameter optimization procedure of the proposed method.
Input: the source domain data $(x_s, y_s)$ and the target domain data $x_t$, with $M$ total samples each.
Output: the optimized parameters $\theta_{f\_st}, \theta_{st}, \theta_{dc}$
# Knowledge Distillation Phase
for m = 0 to epochs do
  for each mini-batch do
    # Forward propagation
    Teacher: $f_{te} \leftarrow FE_{te}(x_s, \theta_{f\_te})$, $\hat{y}_{te} \leftarrow AUD_{te}(f_{te}, \theta_{te})$
    Student: $f_{st\_s} \leftarrow FE_{st}(x_s, \theta_{f\_st})$, $\hat{y}_{st\_sp} \leftarrow AUD_{st}(f_{st\_s}, \theta_{st})$, $\hat{y}_{st\_hp} \leftarrow AUD_{st}(f_{st\_s}, \theta_{st})$
    $L_{ds}(\hat{y}_{te}, \hat{y}_{st\_sp}) = 2 \alpha T^2 L_{ce}(\hat{y}_{te}, \hat{y}_{st\_sp})$, $L_{st}(\hat{y}_{st\_hp}, y_s) = (1 - \alpha) L_{ce}(\hat{y}_{st\_hp}, y_s)$
    $L \leftarrow L_{ds} + L_{st}$
    # Back propagation
    $\theta_{f\_st}, \theta_{st} \leftarrow \mathrm{Adam}(\nabla_{\theta} L, \theta_{f\_st}, \theta_{st})$
  end for
end for
# Domain Adaptation Phase
for m = 0 to epochs do
  for each mini-batch do
    # Forward propagation
    Source: $f_{st\_s} \leftarrow FE_{st}(x_s, \theta_{f\_st})$, $\hat{dc}_s \leftarrow DC(f_{st\_s}, \theta_{dc})$, $\hat{y}_{st\_hp} \leftarrow AUD_{st}(f_{st\_s}, \theta_{st})$
    Target: $f_{st\_t} \leftarrow FE_{st}(x_t, \theta_{f\_st})$, $\hat{dc}_t \leftarrow DC(f_{st\_t}, \theta_{dc})$
    $L_f(f_{st\_s}, f_{st\_t}) = \mathbb{E}_{X_s, X_s' \sim f_{st\_s}}[gk(X_s, X_s')] + \mathbb{E}_{X_t, X_t' \sim f_{st\_t}}[gk(X_t, X_t')] - 2 \mathbb{E}_{X_s \sim f_{st\_s}, X_t \sim f_{st\_t}}[gk(X_s, X_t)]$
    $L_{dc}(x_s, x_t; \theta_{f\_st}, \theta_{dc}) = \sum_{i=1}^{sn} [\log(1 - \hat{dc}_s^i) + \log(\hat{dc}_t^i)]$
    $L_{st}(f_{st\_s}, \theta_{st}) = L_{ce}(\mathrm{Softmax}(AUD_{st}(f_{st\_s}, \theta_{st})), y_s)$
    $L \leftarrow L_f + L_{dc} + L_{st}$
    # Back propagation
    $\theta_{f\_st}, \theta_{st}, \theta_{dc} \leftarrow \mathrm{Adam}(\nabla_{\theta} L, \theta_{f\_st}, \theta_{st}, \theta_{dc})$
  end for
end for
# Robust Domain Adaptation Phase
# Pseudo-labeling
$f_{st\_t} \leftarrow FE_{st}(x_t, \theta_{f\_st})$, $y_{tl} \leftarrow AUD_{st}(f_{st\_t}, \theta_{st})$
for m = 0 to epochs do
  for each mini-batch do
    # Forward propagation
    Source: $f_{st\_s} \leftarrow FE_{st}(x_s, \theta_{f\_st})$, $\hat{dc}_s \leftarrow DC(f_{st\_s}, \theta_{dc})$, $\hat{y}_{st\_hp} \leftarrow AUD_{st}(f_{st\_s}, \theta_{st})$
    Target: $f_{st\_t} \leftarrow FE_{st}(x_t, \theta_{f\_st})$, $\hat{dc}_t \leftarrow DC(f_{st\_t}, \theta_{dc})$
    Pseudo target: $f_{st\_tl} \leftarrow FE_{st}(x_t, \theta_{f\_st})$, $\hat{y}_{st\_tl} \leftarrow AUD_{st}(f_{st\_tl}, \theta_{st})$
    $L_f$ and $L_{dc}$ as in the domain adaptation phase
    $L_{aud}(f_{st\_s}, f_{st\_tl}; \theta_{st}) = L_{ce}(\mathrm{Softmax}(AUD_{st}(f_{st\_s}, \theta_{st})), y_s) + L_{ce}(\mathrm{Softmax}(AUD_{st}(f_{st\_tl}, \theta_{st})), y_{tl})$
    $L \leftarrow L_f + L_{dc} + L_{aud}$
    # Back propagation
    $\theta_{f\_st}, \theta_{st}, \theta_{dc} \leftarrow \mathrm{Adam}(\nabla_{\theta} L, \theta_{f\_st}, \theta_{st}, \theta_{dc})$
  end for
end for
return $\theta_{f\_st}, \theta_{st}, \theta_{dc}$

4. Experiments

4.1. Data Preparation

4.1.1. Dataset

Two publicly available NILM datasets, UK-DALE [45] and REDD [46], were used for performance evaluation. UK-DALE contains smart meter data from five UK buildings, with the total home consumption sampled every 1 s and the corresponding device-level consumption sampled every 6 s. The data were recorded for 39–600 days. REDD was collected from six actual buildings in the United States. The measurement period is between 3 and 19 days, consisting of appliance-level energy consumption data sampled every 3 s and total measurements sampled every 1 s.
This article analyzes the use of the following five representative house appliances: dishwasher (DW), refrigerator (FG), kettle (KT), microwave (MV), and washing machine (WM). Since REDD does not contain kettle data, only four appliances, excluding the kettle, are used for REDD. The selected appliances exhibit various power patterns and power levels. The FG generally consumes constant, low power, whereas the others consume very high power when active. The DW and WM have very complex power usage patterns and power intensities. The MV and KT have very monotonous power usage patterns. These five home appliances are generally designated as representative research targets because they account for most of the power consumption in a building.
In UK-DALE, House 1 uses data collected for 74 days from 1 January 2013 to 15 March 2013, and House 2 uses data collected for 74 days from 1 June 2013 to 13 August 2013. In REDD, House 1 and House 3 use data collected over 39 days from 17 April 2011 to 25 May 2011.

4.1.2. Data Preprocessing

The power consumption in both datasets is downsampled to 1-min resolution and then preprocessed for missing values using linear interpolation. Each house appliance is labeled ON (1) if its power consumption (over 15 min) is greater than an experimentally set threshold and OFF (0) if it is less than the threshold. Figure 3 and Figure 4 show the power usage of each home appliance in UK-DALE and REDD, respectively, and the corresponding thresholds for determining an ON event. The thresholds were determined experimentally so as to include all ON states. However, since the FG operates continuously, its threshold was determined based on the state in which the motor is running. Table 1 shows the exact threshold value of each home appliance and the number of ON events determined accordingly. The split ratio of training, validation, and test data is 6:2:2. A sliding window of about 15 min is applied around each ON event. A sliding window $W$ with stride length $l_s$ runs forward over the sequence to obtain an input sample $x = (x_1, x_2, \dots, x_W)$. For the $i$-th window, the network outputs $y_i = (y_{DW}^i, y_{FG}^i, y_{KT}^i, y_{MV}^i, y_{WM}^i)$.
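The preprocessing described above can be sketched with pandas and NumPy as follows: downsample to 1-min resolution, linearly interpolate missing values, apply a per-appliance ON threshold, and cut non-overlapping windows. The data organization, the example threshold, and the reading of the ON rule as "any sample in the window exceeds the threshold" are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def preprocess(power: pd.Series, threshold: float, window: int = 15, stride: int = 15):
    """power: appliance-level consumption indexed by timestamp; returns (windows, on_labels)."""
    p = power.resample("1min").mean().interpolate(method="linear")   # downsample + fill gaps
    x, y = [], []
    for start in range(0, len(p) - window + 1, stride):              # non-overlapping windows
        seg = p.iloc[start:start + window].to_numpy()
        x.append(seg)
        y.append(int(seg.max() > threshold))   # ON if any sample exceeds the threshold (assumed rule)
    return np.stack(x), np.array(y)

# Hypothetical usage: xs, ys = preprocess(kettle_series, threshold=2200)
```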

4.2. Experimental Setup

4.2.1. Implementation Configuration

To obtain an input sample, $W$ is set to 15 and $l_s$ is set to 15 so that the windows do not overlap. The TN has 3.2 times more parameters in the feature extractor and 1.6 times more parameters in the fully connected layers than the SN. The numbers of epochs in the robust domain adaptation and domain stabilization phases are not set separately because the early stopping parameter automatically controls learning. The basic structure of the SN is adopted from [20]. The TN was experimentally determined to have a structure approximately twice as large as the SN. The mini-batch size is set to the maximum value applicable in the experimental environment. A decaying learning rate is used, repeatedly reducing the rate by one-third to find the optimal value. The parameters of the proposed model are listed in Table 2.
All experimental models were implemented and executed in Python 3.6 [47] with the PyTorch framework [48], and training and inference used an NVIDIA RTX 2070 SUPER GPU.
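One way to realize the decaying learning rate described above in PyTorch is a ReduceLROnPlateau schedule with factor 1/3; the monitored quantity, the placeholder network, and the patience value are assumptions for illustration, not settings reported by the paper.

```python
import torch

model = torch.nn.Linear(15, 5)                       # placeholder network for illustration only
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
# Cut the learning rate to one-third of its current value whenever the
# validation loss stops improving (the patience value is assumed).
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=1/3, patience=4)

# In the training loop, call scheduler.step(val_loss) after each validation pass.
```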

4.2.2. Ablation Study Methods

Our model consists of the following four main techniques: TCN, gkMMD, teacher–student (TS) structure, and PL. We introduce an ablation study on five methods to investigate how individual components influence performance improvements in the proposed model.
  • Baseline: Typical domain adaptation method with BiLSTM-based feature extractors;
  • TCN-DA: Domain adaptation method with TCN-based feature extractor;
  • gkMMD-DA: Domain adaptation method with Gaussian kernel trick-based MMD Loss in baseline;
  • TS-DA: A domain adaptation method for extracting features based on the robust knowledge distillation of the teacher–student structure. The feature extractor of SN used BiLSTM, such as the baseline, and the feature extractor of TN used BiLSTM, which is four times the size of the student;
  • PL-DA: Domain adaptation method that performs domain stabilization with pseudo-labeling on the baseline.

4.2.3. Evaluation Metrics

Performance is evaluated using the F1-score, a common metric. The F1-score is derived as shown in Equation (26):
$$F1(TP, FP, FN) = \frac{2}{\frac{TP + FP}{TP} + \frac{TP + FN}{TP}} = \frac{2 TP}{2 TP + FP + FN} \quad (26)$$
where T P is true positive, F P is false positive, and F N is false negative.
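For completeness, a small sketch of Equation (26), computing the per-appliance F1-score from binary ON/OFF predictions; this is just the textbook definition and does not depend on any particular library.

```python
import numpy as np

def f1_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """F1 = 2*TP / (2*TP + FP + FN) for binary ON/OFF labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```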
To the best of our knowledge, there is no low sampling-based classification study in the domain adaptation field for NILM. Therefore, we did not conduct a one-on-one comparison with other studies.

4.3. Case Studies and Discussions

In this section, we conduct experiments on two cases. In the first case, one house is designated as the source domain and another house as the target domain within the same dataset. In the second case, the source domain and the target domain are taken from different datasets. Table 3, Table 4 and Table 5 show the F1 scores of domain adaptation for the six methods. The ‘Improvement’ row shows how much the proposed method improves over the baseline. In addition, the ablation study results indicate how much each technique affects overall performance.

4.3.1. Domain Adaptation within the Same Dataset

In this subsection, experiments are carried out on the first case described above. In Table 3, $U_1$ denotes House 1 of UK-DALE, $U_2$ denotes House 2 of UK-DALE, $R_1$ denotes House 1 of REDD, and $R_3$ denotes House 3 of REDD. Results are missing for some appliances because REDD does not have a kettle and the DW is not used in $R_3$.
Relative to the baseline, TCN-DA was the technique with the largest influence on performance apart from our full method, showing an average improvement of 3.38%. Next, TS-DA showed an improvement of 2.45%. gkMMD-DA showed little improvement, and in some cases slightly reduced performance. Table 4 shows the F1 scores when TCN and gkMMD are used together; gkMMD generally helps improve performance when used with networks that have residual blocks. PL-DA yielded an average performance stabilization of 0.51% because it trains the model in the direction of stabilizing the domain by fine-tuning the network. Our method showed a significant improvement of 6.03% on average compared to the baseline.

4.3.2. Domain Adaptation between Different Datasets

In this subsection, experiments are performed on the second case described above. In Table 5, UK-DALE → REDD is an experiment using UK-DALE as a source domain and REDD as a target domain, and REDD → UK-DALE is an experiment using REDD as a source domain and UK-DALE as a target domain.
In the second case, the average performance improves by 5.74% even though the degree of domain change is greater than in the first case. Although the domains differ, the same type of appliance has nearly the same power usage pattern, so domain adaptation is performed well. Therefore, we have confirmed that in the field of NILM it may not be necessary to train a new neural network for every household and living area. Our method shows better results than the baseline.
The experiments show that domain adaptation within the same dataset performs well with the proposed method, and performance improvements are also seen for domain adaptation between different datasets. It is a significant result that our method achieves a 5–6% performance improvement through a single training run, without training an individual model for every household. There are several main reasons for the improved accuracy. (1) Rich domain-independent feature information is extracted through teacher–student-based knowledge distillation. (2) Using TCN residual blocks and gkMMD together effectively reduces the distribution mismatch between the two domains. (3) PL stabilizes the network’s decision boundaries.

4.3.3. Discussions

The proposed model can automatically track the use of individual appliances from the aggregate load. We discuss potential applications of our method for elderly households living alone and for public electricity management institutions.
For elderly people living alone, the risk of dying unattended is generally very high, and this is one of the critical problems to be solved at the government level. By analyzing device usage patterns, a household risk detection system can be developed that detects abnormalities in the household. Efficient energy management is an essential issue for public electricity management institutions. An energy management system could be developed that adjusts the power generation ratio by identifying and managing energy-inefficient customers based on home appliance usage patterns and power usage.
The proposed method has several limitations. (1) Domain adaptation is difficult to apply if the house appliances in the source and target data are different. (2) The difference in power usage between households is so large that the data imbalance is severe. (3) Although performance is improved by reducing the distribution difference between the source and target features, there is no clear theoretical basis for claiming that reducing the distribution difference extracts domain-independent features; the evidence is largely empirical. In future work, we aim to address the second limitation, data imbalance, which is one of the most fundamental problems in neural network training. Future work is planned in the direction of GAN-based sampling methods to resolve data imbalance, or networks that learn well despite data imbalance.

5. Conclusions

We developed a novel methodology that combines robust knowledge transfer and network stabilization for NILM to improve previous tasks and perform generalization across domains. The proposed method improves the detection performance of device usage for unlabeled target domain data by using a network trained only on the labeled source data. Teacher–student-based knowledge distillation is adopted to transfer quality features from the source domain. PL is utilized for domain stabilization through low-density separation between classes and entropy regularization effects. gkMMD is employed to reduce distribution differences between domain-independent features. Based on various techniques, we improve the performance of the proposed domain adaptation method by considering the distribution of robust domain-independent features.
To validate the proposed method, we used the UK-DALE and REDD datasets. For data preprocessing, training, validation, and test sets were constructed by experimentally setting thresholds for distinguishing ON events for each appliance. Five ablation methods were evaluated in the performance tests. For domain adaptation within the same dataset, the proposed method improved the F1 score over the baseline by an average of 6.04%. For domain adaptation between different datasets, the proposed method improved the F1 score over the baseline by an average of 5.74%. Although performance improves less for problems with much larger domain changes, maintaining performance under such changes is itself a meaningful achievement.

Author Contributions

Formal analysis, C.-H.H.; Methodology, C.-H.H.; Writing—original draft, C.-H.H.; Writing—review & editing, C.-H.H. and S.-G.K.; Data curation, H.-E.L.; Validation, H.-E.L., Y.-J.K. and S.-G.K.; Visualization, H.-E.L.; Funding acquisition, Y.-J.K.; Supervision, S.-G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government. [22ZS1300, Research on High Performance Computing Technology to overcome limitations of AI processing].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

REDD and UK-DALE datasets can be found at http://redd.csail.mit.edu/ (accessed on 30 May 2022) and https://ukerc.rl.ac.uk/DC (accessed on 30 May 2022), respectively.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gherheș, V.; Fărcașiu, M.A. Sustainable Behavior among Romanian Students: A Perspective on Electricity Consumption in Households. Sustainability 2021, 13, 9357. [Google Scholar] [CrossRef]
  2. Somchai, B.; Boonyang, P. Non-intrusive appliances load monitoring (nilm) for energy conservation in household with low sampling rate. Procedia Comput. Sci. 2016, 86, 172–175. [Google Scholar]
  3. Nur Farahin, E.; Md Pauzi, A.; Yusri, H.M. RETRACTED: A review disaggregation method in Non-intrusive Appliance Load Monitoring. Renew. Sustain. Energy Rev. 2016, 66, 163–173. [Google Scholar]
  4. Shikha, S.; Angshul, M. Deep sparse coding for non–intrusive load monitoring. IEEE Trans. Smart Grid 2017, 9, 4669–4678. [Google Scholar]
  5. Cominola, A.; Giuliani, M.; Piga, D.; Castelletti, A.; Rizzoli, A.E. A hybrid signature-based iterative disaggregation algorithm for non-intrusive load monitoring. Appl. Energy 2017, 185, 331–344. [Google Scholar] [CrossRef]
  6. Shi, X.; Ming, H.; Shakkottai, S.; Xie, L.; Yao, J. Nonintrusive load monitoring in residential households with low-resolution data. Appl. Energy 2019, 252, 113283. [Google Scholar] [CrossRef]
  7. Georgia, E.; Lina, S.; Vladimir, S. Power Disaggregation of Domestic Smart Meter Readings Using Dynamic Time warping. In Proceedings of the 2014 6th International Symposium on Communications, Control and Signal Processing (ISCCSP), Athens, Greece, 21–23 May 2014; IEEE: Manhattan, NY, USA, 2014; pp. 36–39. [Google Scholar]
  8. Yu-Hsiu, L.; Men-Shen, T. Non-intrusive load monitoring by novel neuro-fuzzy classification considering uncertainties. IEEE Trans. Smart Grid 2014, 5, 2376–2384. [Google Scholar]
  9. Kanghang, H.; He, K.; Stankovic, L.; Liao, J.; Stankovic, V. Non-intrusive load disaggregation using graph signal processing. IEEE Trans. Smart Grid 2016, 9, 1739–1747. [Google Scholar]
  10. Hart, G.W. Nonintrusive appliance load monitoring. Proc. IEEE 1992, 80, 1870–1891. [Google Scholar] [CrossRef]
  11. Yang, Y.; Zhong, J.; Li, W.; Gulliver, T.A.; Li, S. Semisupervised multilabel deep learning based nonintrusive load monitoring in smart grids. IEEE Trans. Ind. Inform. 2019, 16, 6892–6902. [Google Scholar] [CrossRef]
  12. Sagar, V.; Shikha, S.; Angshul, M. Multi-label LSTM autoencoder for non-intrusive appliance load monitoring. Electr. Power Syst. Res. 2021, 199, 107414. [Google Scholar]
  13. Hyeontaek, H.; Sanggil, K. Nonintrusive Load Monitoring using a LSTM with Feedback Structure. IEEE Trans. Instrum. Meas. 2022, 71, 1–11. [Google Scholar]
  14. Da Silva Nolasco, L.; Lazzaretti, A.E.; Mulinari, B.M. DeepDFML-NILM: A New CNN-Based Architecture for Detection, Feature Extraction and Multi-Label Classification in NILM Signals. IEEE Sens. J. 2021, 22, 501–509. [Google Scholar] [CrossRef]
  15. Christoforos, N.; Dimitris, V. On time series representations for multi-label NILM. Neural Comput. Appl. 2020, 32, 17275–17290. [Google Scholar]
  16. Patrick, H.; Calatroni, A.; Rumsch, A.; Paice, A. Review on deep neural networks applied to low-frequency nilm. Energies 2021, 14, 2390. [Google Scholar]
  17. Kong, W.; Dong, Z.Y.; Hill, D.J.; Luo, F.; Xu, Y. Improving nonintrusive load monitoring efficiency via a hybrid programing method. IEEE Trans. Ind. Inform. 2016, 12, 2148–2157. [Google Scholar] [CrossRef]
  18. Basu, K.; Debusschere, V.; Douzal-Chouakria, A.; Bacha, S. Time series distance-based methods for non-intrusive load monitoring in residential buildings. Energy Build. 2015, 96, 109–117. [Google Scholar] [CrossRef]
  19. Yaroslav, G.; Lempitsky, V. Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1180–1189. [Google Scholar]
  20. Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Unsupervised domain adaptation with residual transfer networks. Adv. Neural Inf. Processing Syst. 2016, 29, 136–144. [Google Scholar]
  21. Liu, Y.; Zhong, L.; Qiu, J.; Lu, J.; Wang, W. Unsupervised domain adaptation for nonintrusive load monitoring via adversarial and joint adaptation network. IEEE Trans. Ind. Inform. 2021, 18, 266–277. [Google Scholar] [CrossRef]
  22. Lin, J.; Ma, J.; Zhu, J.; Liang, H. Deep Domain Adaptation for Non-Intrusive Load Monitoring Based on a Knowledge Transfer Learning Network. IEEE Trans. Smart Grid 2021, 13, 280–292. [Google Scholar] [CrossRef]
  23. Suzuki, K.; Inagaki, S.; Suzuki, T.; Nakamura, H.; Ito, K. Nonintrusive Appliance Load Monitoring Based on Integer Programming. In Proceedings of the 2008 SICE Annual Conference, Tokyo, Japan, 20–22 August 2008; IEEE: Manhattan, NY, USA, 2008; pp. 2742–2747. [Google Scholar]
  24. Michael, B.; Jürgen, V. Nonintrusive appliance load monitoring based on an optical sensor. In Proceedings of the 2003 IEEE Bologna Power Tech Conference Proceedings, Bologna, Italy, 23–26 June 2003; IEEE: Manhattan, NY, USA, 2003; Volume 4, p. 8. [Google Scholar]
  25. Arend, B.J.; Xiaohua, X.; Jiangfeng, Z. Active Power Residential Non-Intrusive Appliance Load Monitoring System. In Proceedings of the AFRICON 2009, Nairobi, Kenya, 23–25 September 2009; IEEE: Manhattan, NY, USA, 2009; pp. 1–6. [Google Scholar]
  26. Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 2010, 22, 199–210. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Mei, W.; Weihong, D. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153. [Google Scholar]
  28. Isobe, T.; Jia, X.; Chen, S.; He, J.; Shi, Y.; Liu, J.; Lu, H.; Wang, S. Multi-Target Domain Adaptation with Collaborative Consistency Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8187–8196. [Google Scholar]
  29. Yuang, L.; Wei, Z.; Jun, W. Source-Free Domain Adaptation for Semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1215–1224. [Google Scholar]
  30. Guoqiang, W.; Lan, C.; Zeng, W.; Chen, Z. Metaalign: Coordinating Domain Alignment and Classification for Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 16643–16653. [Google Scholar]
  31. Zechen, B.; Wang, Z.; Wang, J.; Hu, D.; Ding, E. Unsupervised Multi-Source Domain Adaptation for Person Re-Identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12914–12923. [Google Scholar]
  32. Jingjing, L.; Jing, M.; Su, H.; Lu, K.; Zhu, L.; Shen, H.T. Faster domain adaptation networks. IEEE Trans. Knowl. Data Eng. 2021, 1. [Google Scholar] [CrossRef]
  33. Dongdong, W.; Han, T.; Chu, F.; Zuo, M.J. Weighted domain adaptation networks for machinery fault diagnosis. Mech. Syst. Signal Processing 2021, 158, 107744. [Google Scholar]
  34. Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep domain confusion: Maximizing for domain invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar]
  35. Hao, W.; Wang, W.; Zhang, C.; Xu, F. Cross-Domain Metric Learning Based on Information Theory. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014. [Google Scholar]
  36. Juntao, H.; Hongsheng, Q. Unsupervised Domain Adaptation with Multi-kernel MMD. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; IEEE: Manhattan, NY, USA, 2021; pp. 8576–8581. [Google Scholar]
  37. Zhang, W.; Zhang, X.; Lan, L.; Luo, Z. Maximum mean and covariance discrepancy for unsupervised domain adaptation. Neural Processing Lett. 2020, 51, 347–366. [Google Scholar] [CrossRef]
  38. Wen, Z.; Wu, W. Discriminative Joint Probability Maximum Mean Discrepancy (DJP-MMD) for Domain Adaptation. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Manhattan, NY, USA, 2020; pp. 1–8. [Google Scholar]
  39. Mingsheng, L.; Zhu, H.; Wang, J.; Jordan, M.I. Deep Transfer Learning with Joint Adaptation Networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2208–2217. [Google Scholar]
  40. Wan, N.; Zhang, C.; Chen, Q.; Li, H.; Liu, X.; Wei, X. MDDA: A Multi-scene Recognition Model with Multi-dimensional Domain Adaptation. In Proceedings of the 2021 IEEE 4th International Conference on Electronics Technology (ICET), Chengdu, China, 7–10 May 2021; IEEE: Manhattan, NY, USA, 2021; pp. 1245–1250. [Google Scholar]
  41. Wang, L.; Mao, S.; Wilamowski, B.M.; Nelms, R.M. Pre-trained models for non-intrusive appliance load monitoring. IEEE Trans. Green Commun. Netw. 2021, 6, 56–68. [Google Scholar] [CrossRef]
  42. Shaojie, B.; Zico, K.J.; Koltun, V.K. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  43. Geoffrey, H.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
  44. Xin, Y.; Chaofeng, H.; Lifeng, S. Two-Stream Federated Learning: Reduce the Communication Costs. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; IEEE: Manhattan, NY, USA, 2018; pp. 1–4. [Google Scholar]
  45. Kelly, J.K.; Knottenbelt, W. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. Sci. Data 2015, 2, 150007. [Google Scholar] [CrossRef] [Green Version]
  46. Zico, K.J.; Johnson, M.J. Redd: A public data set for energy disaggregation research. In Proceedings of the Workshop on Data Mining Applications in Sustainability (SIGKDD), San Diego, CA, USA, 21 August 2011; pp. 59–62. [Google Scholar]
  47. Linge, S.; Langtangen, H.P. Programming for Computations-Python: A Gentle Introduction to Numerical Simulations with Python 3.6; Springer Nature: Cham, Switzerland, 2020. [Google Scholar]
  48. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Processing Syst. 2019, 32, 8024–8035. [Google Scholar]
Figure 1. A detailed overall configuration diagram of the proposed semi-supervised domain adaptation for multi-label classification on nonintrusive load monitoring.
Figure 2. Step-by-step flowchart of the proposed method.
Figure 3. Power usage and ON thresholds for House 1 and House 2 in the UK-DALE dataset.
Figure 4. Power usage and ON thresholds for House 1 and House 3 in the REDD dataset.
Table 1. ON threshold and the number of ON events in the UK-DALE and REDD datasets.

Appliance | UK-DALE House 1 (Threshold / ON Events) | UK-DALE House 2 (Threshold / ON Events) | REDD House 1 (Threshold / ON Events) | REDD House 3 (Threshold / ON Events)
DW | 2000 / 4431 | 1800 / 3236 | 1000 / 6712 | 650 / 2934
FG | 250 / 2441 | 400 / 5291 | 400 / 2944 | 350 / 3344
KT | 2200 / 4495 | 2000 / 1694 | - / - | - / -
MV | 1400 / 1242 | 1200 / 4218 | 1200 / 4809 | 1600 / 1327
WM | 1800 / 4980 | 1500 / 1524 | 2500 / 4796 | 2200 / 5764
Table 2. Training parameters.

Parameter Description | Value
Number of TCN blocks | 8 (TN), 5 (SN)
Number of filters in each TCN block | 128 (TN), 64 (SN)
Filter size | 3
Number of fully connected layers | 5 (TN), 3 (SN), 2 (Domain Classifier)
Dilation factor | 2^i for block i
Activation function | ReLU
Dropout probability | 0.1
Number of maximum epochs | 200
Number of minimum early stopping epochs | 4
Mini-batch size | 512
Learning rate | 3 × 10^−3
Table 3. F1 score comparison of domain adaptation within the same dataset.

Appliance | Method | UK-DALE (U1→U2) | UK-DALE (U2→U1) | REDD (R1→R3) | REDD (R3→R1)
DW | Baseline | 0.781 | 0.805 | - | -
DW | TCN-DA | 0.832 | 0.827 | - | -
DW | gkMMD-DA | 0.778 | 0.793 | - | -
DW | TS-DA | 0.812 | 0.826 | - | -
DW | PL-DA | 0.787 | 0.811 | - | -
DW | Ours | 0.822 | 0.832 | - | -
DW | Improvement | 5.25% | 3.35% | - | -
FG | Baseline | 0.833 | 0.834 | 0.817 | 0.818
FG | TCN-DA | 0.842 | 0.841 | 0.829 | 0.840
FG | gkMMD-DA | 0.837 | 0.836 | 0.819 | 0.819
FG | TS-DA | 0.850 | 0.853 | 0.824 | 0.827
FG | PL-DA | 0.834 | 0.845 | 0.818 | 0.819
FG | Ours | 0.875 | 0.872 | 0.843 | 0.852
FG | Improvement | 5.04% | 4.56% | 3.18% | 4.16%
KT | Baseline | 0.761 | 0.832 | - | -
KT | TCN-DA | 0.811 | 0.839 | - | -
KT | gkMMD-DA | 0.753 | 0.820 | - | -
KT | TS-DA | 0.807 | 0.835 | - | -
KT | PL-DA | 0.770 | 0.833 | - | -
KT | Ours | 0.817 | 0.868 | - | -
KT | Improvement | 7.36% | 4.33% | - | -
MV | Baseline | 0.742 | 0.791 | 0.793 | 0.790
MV | TCN-DA | 0.751 | 0.798 | 0.806 | 0.721
MV | gkMMD-DA | 0.746 | 0.795 | 0.797 | 0.774
MV | TS-DA | 0.753 | 0.803 | 0.804 | 0.798
MV | PL-DA | 0.744 | 0.796 | 0.794 | 0.793
MV | Ours | 0.774 | 0.812 | 0.814 | 0.818
MV | Improvement | 4.31% | 2.65% | 2.65% | 3.54%
WM | Baseline | 0.615 | 0.611 | 0.841 | 0.782
WM | TCN-DA | 0.725 | 0.708 | 0.844 | 0.799
WM | gkMMD-DA | 0.623 | 0.625 | 0.842 | 0.786
WM | TS-DA | 0.668 | 0.653 | 0.832 | 0.783
WM | PL-DA | 0.623 | 0.615 | 0.843 | 0.783
WM | Ours | 0.736 | 0.713 | 0.870 | 0.832
WM | Improvement | 19.67% | 16.69% | 3.45% | 6.39%
Table 4. F1 score comparison of TCN + gkMMD domain adaptation within the same dataset.

Appliance | UK-DALE (U1→U2) | UK-DALE (U2→U1) | REDD (R1→R3) | REDD (R3→R1)
DW | 0.823 | 0.828 | - | -
FG | 0.857 | 0.854 | 0.834 | 0.847
KT | 0.813 | 0.841 | - | -
MV | 0.762 | 0.805 | 0.809 | 0.764
WM | 0.730 | 0.709 | 0.852 | 0.815
Table 5. F1 score comparison of domain adaptation between different datasets.

Appliance | Method | UK-DALE → REDD | REDD → UK-DALE
DW | Baseline | 0.741 | 0.712
DW | TCN-DA | 0.779 | 0.737
DW | gkMMD-DA | 0.736 | 0.713
DW | TS-DA | 0.770 | 0.745
DW | PL-DA | 0.747 | 0.714
DW | Ours | 0.778 | 0.747
DW | Improvement | 4.99% | 4.92%
FG | Baseline | 0.786 | 0.764
FG | TCN-DA | 0.794 | 0.787
FG | gkMMD-DA | 0.787 | 0.769
FG | TS-DA | 0.800 | 0.772
FG | PL-DA | 0.787 | 0.770
FG | Ours | 0.821 | 0.797
FG | Improvement | 4.45% | 4.32%
MV | Baseline | 0.719 | 0.739
MV | TCN-DA | 0.726 | 0.716
MV | gkMMD-DA | 0.719 | 0.746
MV | TS-DA | 0.729 | 0.749
MV | PL-DA | 0.717 | 0.743
MV | Ours | 0.742 | 0.763
MV | Improvement | 3.2% | 3.25%
WM | Baseline | 0.563 | 0.758
WM | TCN-DA | 0.669 | 0.773
WM | gkMMD-DA | 0.573 | 0.766
WM | TS-DA | 0.610 | 0.758
WM | PL-DA | 0.568 | 0.763
WM | Ours | 0.672 | 0.769
WM | Improvement | 19.36% | 1.45%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
