Article

Deep Transfer Learning for Time Series Data Based on Sensor Modality Classification

1 Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
2 Department of Informatics, Kindai University, 3-4-1 Kowakae, Higashiosaka City, Osaka 577-8502, Japan
* Author to whom correspondence should be addressed.
Sensors 2020, 20(15), 4271; https://doi.org/10.3390/s20154271
Submission received: 26 June 2020 / Revised: 28 July 2020 / Accepted: 28 July 2020 / Published: 31 July 2020
(This article belongs to the Section Wearables)
Figure 1. An overview of our transfer learning method. (a) A labelled source dataset of single-channel sequences $(\mathcal{X}_S, \mathcal{Y}_S)$ is created by collecting segments $x_i^{(j)}$ of length $L$ from $M$ datasets and attributing them sensor modality labels $y_i^{(j)}$. $(\mathcal{X}_S, \mathcal{Y}_S)$ is then used to train a sDNN that predicts the sensor modality of each segment. (b) A mDNN is built to learn the predictive target function $f_T$. The weights of the trained sDNN are transferred to the mDNN. The latter is then fine-tuned on the target domain using $(\mathcal{X}_T, \mathcal{Y}_T)$.

Figure 2. mDNN used for the learning of $f_T$ on the target domain. The input segments of the target dataset $\mathcal{X}_T$ are sent through a batch normalisation layer. All sensor channels are then separated and processed by $S$ branches with the same number and type of hidden layers as the sDNN trained on the source dataset $(\mathcal{X}_S, \mathcal{Y}_S)$. The outputs of the $S$ branches are concatenated and sent through fully-connected and softmax layers for classification. The mDNN is fine-tuned using the target dataset $(\mathcal{X}_T, \mathcal{Y}_T)$.

Figure 3. Model used on the CogAge dataset. Each of the three mDNNs processes the smartphone (sp), smartwatch (sw) or smartglasses (sg) data. $L_*$ and $S_*$ with $* \in \{sp, sw, sg\}$ refer to the segment length and number of sensor channels, respectively. Outputs from the three mDNNs are concatenated and fed into fully-connected and softmax layers.

Figure 4. Flowchart of the three approaches tested on the CogAge dataset: TTO (no transfer), VAE-transfer, and CNN-transfer. The mDNN follows the architecture described in Figure 3.

Figure 5. Layer differences $D^{(k)}$ for all layers of mDNNs trained using TTO and CNN-transfer for BBH classification on the CogAge dataset. Each bar corresponds to a layer and represents its difference between TTO and CNN-transfer. Layer differences are arranged in decreasing order. For each of them, we indicate if it was computed for a layer belonging to a branch processing smartphone, smartwatch, or smartglasses data. Layers not belonging to any branch (e.g., concatenation or fully-connected layers) are categorised as "other".

Figure 6. Global channel-wise Jacobian scores $\Omega_k$ for mDNNs trained by TTO (red) and CNN-transfer (blue). These scores are computed for BBH on the testing set of the CogAge dataset. sp, sw and sg refer to smartphone, smartwatch, and smartglasses, respectively.

Abstract

The scarcity of labelled time-series data can hinder the proper training of deep learning models. This is especially relevant for the growing field of ubiquitous computing, where data coming from wearable devices have to be analysed using pattern recognition techniques to provide meaningful applications. To address this problem, we propose a transfer learning method based on attributing sensor modality labels to a large amount of time-series data collected from various application fields. Using these data, our method firstly trains a Deep Neural Network (DNN) that can learn general characteristics of time-series data, then transfers it to another DNN designed to solve a specific target problem. In addition, we propose a general architecture that can adapt the transferred DNN regardless of the sensors used in the target field, making our approach particularly suitable for multichannel data. We test our method for two ubiquitous computing problems—Human Activity Recognition (HAR) and Emotion Recognition (ER)—and compare it to a baseline that trains the DNN without transfer learning. For HAR, we also introduce a new dataset, Cognitive Village-MSBand (CogAge), which contains data for 61 atomic activities acquired from three wearable devices (smartphone, smartwatch, and smartglasses). Our results show that our transfer learning approach outperforms the baseline for both HAR and ER.

1. Introduction

The prevalence of wearable devices has simplified the collection of sensor data for ubiquitous and wearable computing applications over the past years. In this context, machine learning has become necessary to provide meaningful services by automatically recognising complex patterns in time-series data. Following the most common approach, a ubiquitous computing application—like Human Activity Recognition (HAR) or Emotion Recognition (ER)—is formulated as a classification problem. A classification model is built on a training dataset composed of sensor data labelled with their corresponding classes (e.g., activities and emotions for HAR and ER, respectively). The model is then used to estimate the class of test data whose actual class is unknown.
To build an accurate model, it is required to find an appropriately abstracted representation of the data—called features—which contains all the information relevant to the target classification problem. This process is referred to as feature extraction. In traditional approaches, features were heuristically engineered based on prior knowledge about the sensor data of the target problem. They have however been progressively overshadowed by feature learning methods, which learn useful features from data in a more automated way [1,2]. The most popular feature learning methods are based on deep learning, i.e., machine learning using Deep Neural Networks (DNNs). A DNN consists of an ensemble of artificial neurons organised in a layer-wise fashion. Each neuron is a simple nonlinear computational unit with internal parameters (weights and biases). During the training of a DNN, these parameters are optimised so that the model can accurately categorise training data into their own classes. Past works have shown that the neurons of a trained DNN encode specific features which are more effective than traditional human-crafted features [3]. The effectiveness of DNNs has been consistently verified over the past years for numerous wearable-computing applications, including HAR [1,2,4,5] and ER [6,7].
Using DNNs is, however, confronted with several difficulties in practice, such as the lack of practical techniques for the optimisation of hyper-parameters (e.g., neural activation function, number of layers, number of neurons per layer, etc.) and the high computational power required to train complex models in a reasonable amount of time. Among them, the major obstacle remains the need for a large quantity of labelled training data. A high diversity in the training data is required so that the classification model becomes robust to the intra-class variability which might be caused by many different factors. For HAR for instance, the way a certain activity is executed may significantly vary depending on the person, producing very different sensor data. Even the same person could produce intra-class variability by performing the same activity in different ways due to external factors (e.g., surrounding environment, positions of sensors, etc.).
A possible solution to alleviate the data scarcity problem is transfer learning, which refers to techniques that aim at extracting knowledge from a source domain and using it to improve the learning of a model on a target domain [8]. Data from the source domain can partially compensate for the scarcity of data on the target domain. In other words, by performing some specific task on the source domain, the model can learn information relevant to the target problem from "external" data. Deep transfer learning—which refers to transfer learning applied to DNNs—has in particular become widespread with the rise in popularity of DNNs. Typically, parameters (weights and biases) of a DNN pre-trained on a source domain are transferred to another compatible DNN on the target domain. Previous works have shown that the success of the gradient descent optimisation applied during the training of a DNN is heavily dependent on the initial values of its parameters [9]. Deep transfer learning is based on the assumption that, if the features learned on the source domain are also useful for the target domain, then the parameters of a DNN pre-trained on the source domain are also adequate initial parameters for a DNN on the target domain [10,11]. Once transferred, the target DNN is fine-tuned, i.e., retrained using the target data to adjust the transferred parameters to the problem on the target domain as needed.
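As an illustration, the following minimal Keras sketch shows this pre-train, transfer, fine-tune scheme. The layer types, sizes, class counts and dataset variable names are hypothetical and chosen only to keep the example short; they are not the models used in this paper.

```python
# Sketch of parameter-based deep transfer learning: copy the weights of a DNN
# pre-trained on a source domain into a compatible DNN for the target domain,
# then fine-tune the latter on the (scarce) labelled target data.
from tensorflow.keras import layers, models

def make_feature_extractor(input_len):
    """Shared hidden-layer stack (architecture chosen for illustration only)."""
    inp = layers.Input(shape=(input_len, 1))
    x = layers.Conv1D(32, 5, activation='relu')(inp)
    x = layers.Conv1D(32, 5, activation='relu')(x)
    x = layers.GlobalMaxPooling1D()(x)
    return inp, x

# 1) Pre-train on the source task (here, 16 hypothetical source classes).
src_in, src_feat = make_feature_extractor(128)
src_out = layers.Dense(16, activation='softmax')(src_feat)
source_model = models.Model(src_in, src_out)
source_model.compile(optimizer='adadelta', loss='categorical_crossentropy')
# source_model.fit(X_source, Y_source, ...)

# 2) Build a target model with the same hidden layers and a new softmax head.
tgt_in, tgt_feat = make_feature_extractor(128)
tgt_out = layers.Dense(10, activation='softmax')(tgt_feat)   # 10 target classes
target_model = models.Model(tgt_in, tgt_out)

# 3) Transfer: initialise the target hidden layers with the pre-trained weights.
for src_layer, tgt_layer in zip(source_model.layers[1:-1], target_model.layers[1:-1]):
    tgt_layer.set_weights(src_layer.get_weights())

# 4) Fine-tune all parameters on the labelled target data.
target_model.compile(optimizer='adadelta', loss='categorical_crossentropy')
# target_model.fit(X_target, Y_target, ...)
```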
While deep transfer learning has become standard in image processing, it has not reached the same level of maturity when time-series data are involved, for several reasons. Firstly, labelled time-series data are rather scarce due to the high cost of the labelling task for a specific application. This results in a lack of very large-scale time-series datasets (like ImageNet for images). Secondly, the development of a transfer learning method working for any type of time-series data is confronted with the difficulty that data formats on the target domain can significantly vary depending on the application. Some sensors can for instance provide "sparse" time-series containing data points unevenly spaced in time indicating events, while others provide "non-sparse" time-series consisting of data values evenly spaced in time and sampled at high frequencies. Additionally, different applications of ubiquitous computing may use different numbers and types of sensors because of differences in the relevance of devices, their obtrusiveness or ease of setup, etc. Those applications thus rely on data consisting of different numbers of channels, where we refer to a channel as one dimension of a sequence of sensor recordings. For instance, a temperature sensor provides a single-channel sequence, while a three-axis accelerometer records three channels, each indicating the acceleration on one axis.
We propose a transfer learning method for time-series that leverages existing datasets to bypass the issue of data scarcity on the target domain. We carry out our studies using non-sparse time-series datasets because they are the most common type of data in ubiquitous computing applications. Transfer learning for images has shown that learning general image features on ImageNet led to successful transfers of information to various target domains. In a similar way, we aim at learning general features for non-sparse time-series data which could be re-purposed for various target domains. We hypothesise that learning features related to the type of time-series data could achieve this goal. We therefore propose to use sensor modalities as labels, which are commonly available. We consider that two sensors are part of the same modality group if they acquire the same type of measurement and process it in similar ways. For instance, similar devices acquiring acceleration placed at different locations are part of the same sensor modality group; acceleration acquired from two different types of devices is considered as two different sensor modalities (measurements processed in different ways); and acceleration and EEG are considered as different modalities (different types of measurements).
We also design our method so that it can be applied to target domains involving any number of channels. Our method firstly decomposes data on a source domain into single-channel data, and trains a DNN called single-channel DNN (sDNN) for sensor-modality classification. In other words, this DNN takes single-channel data as input and predicts their sensor modalities. Then, a model called multichannel DNN (mDNN) [12] is built by replicating and fine-tuning the sDNN for each of the channels on the target domain. This mDNN performs recognition on the target domain by fusing the outputs from all channels.
To sum up, we propose a novel, general deep transfer learning method for time-series which firstly trains an sDNN as a sensor-modality classification model using single-channel data in a source domain, and then constructs an mDNN on the target domain by replicating the sDNN on each of the target data channels. Contrary to existing time-series transfer learning methods [13,14] which focused on single-channel data analysis, our approach can be applied to data with any number of channels. Furthermore, we introduce a new wearable-based HAR dataset, called Cognitive Village-MSBand dataset (CogAge), for the recognition of 61 activities. We carry out experiments for both HAR and ER to test our method using the CogAge and DEAP [15] datasets, respectively. Our results show that our transfer learning method consistently achieves performances at least as good as the baseline not using any transfer on both the CogAge and DEAP datasets. All research contents (source and target datasets, codes, trained DNN models) are made available to help other researchers reproduce our findings (research contents available at the following repository: https://www.info.kindai.ac.jp/~shirahama/transfer/).
This paper is organised as follows: Section 2 presents an overview of related work tackling the problem of deep transfer learning. Section 3 details our transfer learning approach. Section 4 presents the experiments carried out on the CogAge dataset for HAR, while Section 5 does the same for ER. Section 6 presents a detailed analysis using the findings of the two batches of experiments. Finally, Section 7 concludes the paper and presents potential future directions.

2. Related Work

We firstly focus on the image processing field, where transfer learning has been intensively explored, because our method is inspired by works carried out in this field. We then provide an overview of transfer learning methods developed for time-series, compare our method to them and clarify its novelty. Finally, we give a short review of existing works related to our studies in HAR and ER.

2.1. Deep Transfer Learning for Images

Most general deep transfer learning methods with proven effectiveness have been developed for image modalities, due to the availability of large datasets like ImageNet (more than 14 million images labelled with over 20,000 different categories). Powerful feature extractor models like AlexNet [16], VGG-net [17] and ResNet [18] were trained on subsets of the ImageNet dataset in the framework of the ImageNet Large Scale Visual Recognition Challenges (ILSVRC) and are nowadays regularly re-used and fine-tuned for more specific applications [11]. The first studies hinting at the benefits of transfer learning emerged approximately at the same time. In [3], using a variant of AlexNet [16], a key aspect of the behaviour of Convolutional Neural Networks (CNNs) was highlighted by showing that each neuron of a convolutional layer encodes a specific feature, whose specificity increases with the depth of the layer. The authors also analysed the generality of features learned by the model trained on the source domain (ImageNet) by checking its transferability on three smaller target domains. The major performance improvements showed the potential of parameter-based transfers for DNNs. In a similar fashion, Donahue et al. [19] managed to show how AlexNet could improve the performances of various target problems such as domain adaptation, object recognition, sub-category recognition, and scene recognition. The authors of [20] trained a variant of AlexNet for image classification, and transferred it to object detection and localisation tasks, obtaining state-of-the-art results in both setups. In [21], the authors extracted features from warped regions of images by pre-training a variant of AlexNet on a subset of ImageNet. It was then fine-tuned using the warped images as inputs for image classification on two different target domains (PASCAL VOC and a different subset of ImageNet). The transferred model was able to significantly outperform the previously best solutions on both target domains. Similarly, Oquab et al. [22] presented a study in which the layers of AlexNet trained discriminatively on ImageNet were transferred to a DNN model designed for object and action classification on the PASCAL VOC dataset. In [10], researchers analysed the impact of different transfer learning parameters for AlexNet, such as the number of transferred layers, the use of fine-tuning or not, and the use of different subsets of ImageNet as source and target. They showed that the target performance drops when only transferring deeper layers (which were shown to encode features more specific to the source problem [3]), and how important fine-tuning on the target domain is. In addition, it was demonstrated that the transfer learning process could boost the generalisation capacity of the network compared to not using it.
More recently, diverse attempts to further improve the efficiency of transfer learning by changing different parameters have been made. In [23], the authors present a method based on information theory to automatically find the most suitable source domain to perform transfer given a target domain with a specific task. Assuming that different CNNs with similar architectures have each been trained on a source domain, they propose a ranking metric called "transferability" by computing the Mutual Information between the target labels and the features of each of the CNNs. The transferability can be used to estimate how much a specific source domain can reduce the uncertainty in predicting the test labels. Experiments showed that the top ranked CNNs in terms of transferability led to the best performances after transfer and fine-tuning on the target domain. In [24], a method to improve the fine-tuning procedure on the target domain is presented. Assuming that a pre-trained model is available on a source domain (e.g., ImageNet), the authors propose to jointly train a "policy network" using a Gumbel Softmax distribution and a DNN for the target classification task. For each testing image and layer of the target DNN, the policy network is used to determine whether the weights of the layer should be frozen or fine-tuned using the image. Experiments showed that the proposed adaptive fine-tuning approach led to better results than other state-of-the-art fine-tuning and regularisation techniques. Following a similar idea, Li et al. [25] investigate the effectiveness of different regularisation approaches whose aim is to keep the weights which are fine-tuned on the target domain as close as possible to those learned on the source domain. A baseline consisting of a regular fine-tuning of the target DNN was also tested. Experiments for image classification and segmentation showed that all regularisation approaches led to better performances than the baseline.
It can be noted that all aforementioned works are based on supervised pre-training of one or several DNN models on a source domain. Unsupervised pre-training using unlabelled data has also been attempted for image modalities [11,26], but failed to yield performances as good as supervised pre-training regardless of the quantity of available unlabelled data. This highlights the superiority of using labelled data on the source domain, and motivates our choice to define a supervised pre-training using sensor modalities as labels.

2.2. Deep Transfer Learning for Time-Series

Transfer learning techniques have been much less explored for time-series data because of the scarcity of data in the ubiquitous computing field and the absence of a large-scale labelled dataset like ImageNet. Nevertheless, past works have attempted to tackle this issue with different degrees of generality. On a general level, [27] defines different types of transfers that can be applied to wearable-based HAR. It introduces the concepts of instance transfer, which re-uses data from the source domain to train a model in the target domain; feature representation transfer, which finds a feature mapping between the source and target domains; and parameter transfer, which transfers parameters from a model trained on the source domain to a model for the target domain. On a more specific level, several works presented results of parameter transfer for HAR. In [28], results were presented for several parameter transfer scenarios, such as transfer between subjects, datasets, sensor locations, or modalities. All transferred models were tested against a baseline that "regularly" trains the model on the target domain only. Despite the poor relative performances of the transferred models compared to the baseline, the study highlighted some interesting phenomena: the performances of transfers were noticeably better when the parameters of the lower layers were transferred. In [29], a transfer approach for CNNs was presented for the case where labelled target data are scarce but labelled source data are available. It firstly trains a CNN using labelled data on the source domain and defines a CNN with a similar architecture on the target domain. The target CNN is then trained on unlabelled data to minimise the distance between its parameters and those of the source CNN. It, however, only works under the assumption that the sets of activities on the source and target domains are the same. In [30], an iterative co-training approach was presented which uses classification models trained on labelled source data to attribute pseudo-labels to unlabelled target data. It works under the assumption that source and target domains contain the same labels. A transformation which minimises the maximum mean discrepancy between labelled and pseudo-labelled examples is found. Source and target data are then projected into a common space using the transformation, and classifiers are trained on the projected data to attribute more reliable labels.
However, the scope of the above-mentioned studies [28,29,30] is limited to a specific application field (wearable-based HAR), and by strict conditions on the similarity between the source and target domains (e.g., same set of labels, same type of data, etc.). Compared to this, our transfer learning method can be generally applied to any application field using time-series data. This generalisation is demonstrated by targeting wearable-based HAR in Section 4 and ER in Section 5. In addition, our method does not require source and target domains to be characterised by the same label or data types.
To the best of our knowledge, only two past works proposed a general transfer learning method that is potentially usable for different ubiquitous computing applications. In [14], a Recurrent Neural Network (RNN) was trained using data from the UCR Time Series Classification Archive (UCRTSCA) [31], which consists of 85 small-scale univariate time-series datasets covering a wide range of sensor modalities, such as accelerometer data, energy demand, chemical concentration in water, etc. The RNN, composed of an encoder and a decoder, was trained to reproduce its input on its output layer using a subset of 24 datasets of the UCRTSCA (source domain). After this pre-training step, the encoder was used as a feature extractor for a Support Vector Machine (SVM) fine-tuned on each of 30 other datasets of the UCRTSCA (target domain). The experimental results indicated that data on source domains not necessarily related to the target domain were still useful for achieving state-of-the-art results. In [13], a method to compute the similarity between source and target datasets to determine the most suitable dataset for transfer was proposed. It assumes that one labelled target and several labelled source datasets are available. For each dataset, the method firstly computes the average of sequences for each class. The barycentre of all class averages is then computed to yield a "characteristic sequence" of the dataset. The similarity between two datasets is computed using the Dynamic Time Warping (DTW) distance between their respective characteristic sequences. The source dataset with the lowest distance is then chosen and used to train a DNN. Its weights are finally transferred on the target domain for fine-tuning. Experiments carried out on the 85 datasets of the UCRTSCA showed that the transfer yielded better classification performances when the similarity between source and target was higher.
However, the methods in [13,14] remain limited to the processing of single-channel sequences, since their experiments were both carried out on the UCRTSCA, and they do not present how to generalise their approaches to multichannel sequences. In contrast, we propose a multichannel DNN architecture that can be widely used for multichannel sequences in different ubiquitous computing applications.

2.3. Sensor-Based HAR and ER

Sensor-based HAR: HAR is one of the most popular research topics of ubiquitous computing due to cheap and widespread motion sensors such as accelerometers and gyroscopes, the relative simplicity of acquiring labelled data compared to other applications, and its potential applications in several domains such as assistive living, surveillance, improvement of quality of life, or gaming [1,2]. We mainly focus on deep learning used for sensor-based HAR (i.e., HAR using low-level readings in the form of time-series provided by wearable sensors) as opposed to video-based HAR (i.e., relying on vision sensors) [27,32].
Growing evidence from past studies on sensor-based HAR has shown that DNNs could successfully be used for sensor-based HAR using continuous time-series data acquired from wearable sensors [1,4,5], and outperformed traditional approaches relying on manual crafting of features [2]. Recent works have in particular highlighted the importance of convolutional-based DNN architectures [1,2,4], recurrent architectures involving LSTM cells [5], or hybrid models combining both convolutional and LSTM layers [2,4,5] in obtaining state-of-the-art performances on various HAR benchmarks.
Sensor-based ER: ER is an important component of Affective Computing, which designates the study of techniques teaching machines to automatically recognise human affect in order to enhance computer–human interactions [33]. This goal is usually reached by using machine learning techniques on data acquired by sensors and labelled with emotion annotations. ER is approached differently depending on the type of sensor modality used to acquire the data. A large part of the ER literature over the past decades has focused on the analysis of facial expressions in RGB images and/or videos, or speech in audio signals [34]. However, audiovisual sensor modalities are not always available due to difficulties in properly setting up cameras in real life, concerns about privacy, or use-case scenarios where parts of the subjects' faces are hidden [35]. As a consequence, interest in sensor-based ER—the study of ER using wearable sensors recording physiological signals (e.g., Electroencephalography (EEG), Electrodermal Activity (EDA), Electrooculography (EOG), Electromyography (EMG), etc.)—has grown with the increasing availability of wearable devices.
One of the first proposals for a sensor-based ER system can be found in [36]. In this study, the authors proposed to use the EEG channels of the DEAP dataset for the two-class classification problems of low versus high arousal/valence/dominance/liking using the 1-minute data records. Hand-crafted features were firstly computed on the power spectrum of overlapping segments of the original 1-min signals, then projected using a non-parametric model based on the k-Nearest Neighbour (kNN) approach to provide a feature vector for the 1-minute record. A 1NN classifier provided classification performances in a subject-independent setup. With the increasing popularity of deep-learning, researchers have also tried to apply DNNs to sensor-based ER. The authors of [37] proposed an approach computing hand-crafted features on the power spectrum of the DEAP EEG signals and sending them to a stacked autoencoder based on MLP. Classification results for three classes of arousal/valence (low/medium/high) were provided in a subject-independent context. In [7], a bi-modal deep autoencoder approach was proposed to learn features from EEG and EOG signals in an unsupervised way and provide classification results in a subject-dependent context on both DEAP and SEED datasets. In [38], a residual multimodal LSTM architecture using one residual LSTM network to learn features from each input sensor channel was proposed. Classification results for the binary classification of arousal and valence on the DEAP dataset in a subject-dependent context were provided.
It should be noted that all aforementioned works in Section 2.3 have focused on sensor-based HAR or ER without transfer learning. In contrast, we propose a time-series transfer learning approach and test its applicability in both HAR and ER contexts.

3. Methodology Description

Using the notations of [8,27], we define a labelled domain dataset $\mathcal{D}$ as the combination of two components: a set of data instances $\mathcal{X}$ and a vector of associated labels $\mathcal{Y}$. A task $\mathcal{T}$ is defined as the association of $\mathcal{Y}$ with a predictive function $f$ to be learned from the labelled data. The source and target domain datasets are referred to as $\mathcal{D}_S = \{\mathcal{X}_S, \mathcal{Y}_S\}$ and $\mathcal{D}_T = \{\mathcal{X}_T, \mathcal{Y}_T\}$, while the source and target tasks are denoted by $\mathcal{T}_S = \{\mathcal{Y}_S, f_S\}$ and $\mathcal{T}_T = \{\mathcal{Y}_T, f_T\}$, respectively. We assume that $\mathcal{T}_T$ and $\mathcal{D}_T$—which respectively represent the target ubiquitous computing problem to solve and its associated labelled dataset—are available.
We propose a deep transfer learning strategy based on transferring DNN weights learned on a sensor-modality classification problem on $\mathcal{D}_S$ to another DNN trained to solve $\mathcal{T}_T$ on $\mathcal{D}_T$. Our method—illustrated in Figure 1—belongs to the category of inductive transfers, since the source and target tasks are different ($\mathcal{T}_S \neq \mathcal{T}_T$). It consists of the following steps:
1. Definition of $\mathcal{D}_S$ and $\mathcal{T}_S$: $\mathcal{X}_S$ is firstly built by considering $M$ multichannel time-series datasets. Every multichannel sequence in the $j$th dataset ($1 \le j \le M$) is decomposed into individual channels, each of which is divided into segments of length $L$ using a sliding window approach. The segments are aggregated to form the source dataset $\mathcal{X}_S$ defined in Equation (1):

$\mathcal{X}_S = \bigcup_{j=1}^{M} \left\{ x_i^{(j)} \in \mathbb{R}^L \,\middle|\, 1 \le i \le N_j \right\} \quad (1)$

where $x_i^{(j)}$ refers to the $i$th segment of the $j$th source dataset, and $N_j$ is the total number of segments obtained from the $j$th source dataset. In other words, $\mathcal{X}_S$ is the union of all segments extracted from the $M$ source datasets. The source task $\mathcal{T}_S$ is defined as the classification of sensor modalities on $\mathcal{D}_S$. Sensor modality labels $\mathcal{Y}_S$ are defined by Equation (2):

$\mathcal{Y}_S = \bigcup_{j=1}^{M} \left\{ y_i^{(j)} \in \{1, 2, \ldots, C_S\} \,\middle|\, 1 \le i \le N_j \right\} \quad (2)$

where $C_S$ is the number of sensor modalities (i.e., classes) of the source domain, and $y_i^{(j)}$ indicates the sensor modality of $x_i^{(j)} \in \mathcal{X}_S$. $f_S$ is the function which attributes each $x_i^{(j)}$ to its corresponding sensor modality $y_i^{(j)}$.
2. Learning of $f_S$: A single-channel DNN (sDNN) is used to learn $f_S$, as shown in Figure 1a. In the sDNN architecture, a batch normalisation layer is used to perform a regularisation on the segments in $\mathcal{X}_S$ to address the heterogeneity of the source data. Assuming the sDNN contains $H \in \mathbb{N}^*$ hidden layers, we denote the weight matrix and bias vector of the $k$th layer ($1 \le k \le H$) as $W_k$ and $b_k$, respectively. Finally, a softmax layer with $C_S$ neurons is added, each neuron outputting an estimate of the probability of its corresponding class. This way, the sDNN can classify the segments of $\mathcal{X}_S$ using the labels $\mathcal{Y}_S$.
3. Initialisation of a multichannel DNN (mDNN): A mDNN is defined to learn $f_T$, as shown in Figure 2. It is trained using $\mathcal{X}_T$, which contains multichannel segments $X \in \mathbb{R}^{L \times S}$ with $S$ being the number of channels of the target dataset, and $\mathcal{Y}_T$, which contains the associated labels $Y \in \{1, 2, \ldots, C_T\}$ with $C_T$ being the number of classes of the target problem. In the mDNN architecture, a batch normalisation layer is applied to the segments to perform an operation akin to a standard normalisation on the input of the network. The $S$ sensor channels are then separated. The $s$th sensor channel ($1 \le s \le S$) is processed by an ensemble of hidden layers of the same number and type as the hidden layers of the sDNN. We refer to this ensemble of layers as a branch of the mDNN, as depicted in Figure 2. The outputs of all branches are then concatenated and connected to fully-connected layers. A softmax layer with $C_T$ neurons is then added to output class probabilities for the $C_T$ target classes.
4. Transfer of weights from the sDNN to the mDNN: The weights $W_k$ and biases $b_k$ of the $H$ hidden layers of the sDNN learned on $\{\mathcal{D}_S, \mathcal{T}_S\}$ (not including the batch normalisation and softmax layers) are transferred to the branches of the mDNN, as shown in Figure 2. In other words, the $k$th layer of the $s$th branch (for $1 \le k \le H$ and $1 \le s \le S$) has its weight and bias matrices $W_k^{(s)}$ and $b_k^{(s)}$ initialised as $W_k$ and $b_k$, respectively (see the code sketch after this list).
5. Learning of $f_T$: The mDNN is fine-tuned using $(\mathcal{X}_T, \mathcal{Y}_T)$ to learn $f_T$, which is the predictive function for the target ubiquitous computing problem.
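The following minimal Keras sketch illustrates steps 2–5. The hidden-layer types and sizes, the segment length, the class counts and the dataset variable names are illustrative assumptions, not the exact architecture reported later in the experiments.

```python
# Sketch of the sDNN -> mDNN transfer (steps 2-5), with assumed layer sizes.
from tensorflow.keras import layers, models

L, C_S = 128, 16        # segment length and number of source sensor modalities
S, C_T = 24, 55         # number of target channels and target classes

def make_hidden_layers():
    """The H hidden layers shared by the sDNN and every mDNN branch."""
    return [layers.Conv1D(32, 5, activation='relu', padding='same'),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, 5, activation='relu', padding='same'),
            layers.GlobalMaxPooling1D()]

def apply_layers(layer_list, x):
    for layer in layer_list:
        x = layer(x)
    return x

# Step 2: sDNN for sensor-modality classification on single-channel segments.
sdnn_hidden = make_hidden_layers()
s_in = layers.Input(shape=(L, 1))
s_feat = apply_layers(sdnn_hidden, layers.BatchNormalization()(s_in))
s_out = layers.Dense(C_S, activation='softmax')(s_feat)
sDNN = models.Model(s_in, s_out)
sDNN.compile(optimizer='adadelta', loss='categorical_crossentropy')
# sDNN.fit(X_S, Y_S_onehot, epochs=25)

# Step 3: mDNN with one branch per target channel.
m_in = layers.Input(shape=(L, S))
m_x = layers.BatchNormalization()(m_in)
branches, branch_outputs = [], []
for s in range(S):
    ch = layers.Lambda(lambda t, s=s: t[:, :, s:s + 1])(m_x)   # s-th channel only
    branch = make_hidden_layers()
    branches.append(branch)
    branch_outputs.append(apply_layers(branch, ch))
m_feat = layers.Dense(256, activation='relu')(layers.Concatenate()(branch_outputs))
m_out = layers.Dense(C_T, activation='softmax')(m_feat)
mDNN = models.Model(m_in, m_out)

# Step 4: initialise every branch with the trained sDNN hidden-layer weights.
for branch in branches:
    for src_layer, tgt_layer in zip(sdnn_hidden, branch):
        tgt_layer.set_weights(src_layer.get_weights())

# Step 5: fine-tune the mDNN on the labelled target dataset.
mDNN.compile(optimizer='adadelta', loss='categorical_crossentropy')
# mDNN.fit(X_T, Y_T_onehot, epochs=150)
```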
In our experiments, we used CNNs as the sDNN and the branches of the mDNN because of their good performances for time-series classification in diverse application fields of ubiquitous computing [13]. We therefore refer to our transfer approach from now on as CNN-transfer. We also tested fully-connected and recurrent layers with Long Short-Term Memory (LSTM) cells for the hidden layers of the sDNNs. However, both of them ended up performing worse than convolutional layers in all configurations. Those results are consistent with past works which showed that CNNs are better feature extractors than fully-connected or LSTM networks in a time-series classification context [39]. For LSTM-based architectures in particular, finding a properly performing baseline architecture (i.e., not using any transfer) on the target datasets ended up being impractical. The high number of LSTM parameters and the large size of our multichannel architecture limited the complexity of the tested mDNNs. In addition, using multichannel data segments with a long temporal length significantly extended the training time of LSTM models (based on the backpropagation-through-time algorithm) compared to CNN-based approaches (even in configurations where simple LSTM architectures were tested), and increased the likelihood of overfitting. Both phenomena were already highlighted in past HAR literature [5] and supported our decision to use CNNs in our experiments. Results obtained by the LSTM architectures we tested are uploaded to our repository (link provided in the Supplementary Materials).
Four datasets taken from the UCI machine learning repository [40] were used in our study to build the source domain, covering 16 different sensor modalities in total: OPPORTUNITY [41] (accelerometer and IMU data for the recognition of Activities of Daily Life), gas-mixture [42] (gas concentration and conductance readings from chemical sensors), EEG-eye-state [43] (Electroencephalography (EEG) data for open/closed eye recognition) and energy-appliance [44] (data from a low-energy house such as temperature, humidity, air pressure, and energy consumption for the prediction of energy consumption). $C_S = 16$ sensor modalities were obtained in total by using the documentation and information provided by the authors of each dataset. A sDNN trained on the source domain therefore had a softmax layer with 16 units, each outputting the probability that a segment belongs to one sensor modality. The complete list of modalities in the source domain is provided in Table 1.
It should be noted that OPPORTUNITY and gas-mixture are notably larger than the other datasets. The question of balancing the source datasets therefore arose. We tested two approaches: one downsampling the largest dataset so that all datasets provide a balanced contribution, the other taking as much data as possible from each dataset. Both approaches yielded comparable performances, in accordance with a similar analysis where the quantity of data to train transferred models is changed [11]. We report in the following discussions the best performances attained by the aforementioned two approaches.
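A minimal sketch of the balanced-contribution option, assuming the segments and labels of each source dataset are already available as NumPy arrays (the function name and argument layout are hypothetical):

```python
# Randomly downsample each source dataset to the size of the smallest one
# before merging everything into a single source dataset (X_S, Y_S).
import numpy as np

def balance_source_datasets(segment_sets, label_sets, seed=0):
    """segment_sets / label_sets: one array of segments / labels per source dataset."""
    rng = np.random.default_rng(seed)
    n_min = min(len(x) for x in segment_sets)
    X, Y = [], []
    for x, y in zip(segment_sets, label_sets):
        idx = rng.choice(len(x), size=n_min, replace=False)
        X.append(x[idx])
        Y.append(y[idx])
    return np.concatenate(X), np.concatenate(Y)
```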

4. Experiments for Wearable-Based Human Activity Recognition

In this section, we introduce the Cognitive Village-MSBand dataset for wearable-based HAR—referred to as CogAge dataset for the sake of simplicity—and the results of the experiments carried out on it.

4.1. Dataset Description

The CogAge dataset was built by considering human activities as a series of simpler actions, referred to as atomic activities. It aggregates the data from four subjects performing a total of 61 different atomic activities split into two distinct categories: six state activities characterising the pose of a subject, and 55 behavioral activities characterising his/her behavior. The complete list of activities is provided in Table 2. It can be noted that a behavioral activity can be performed while being in a particular state (e.g., drinking can be performed either while sitting or standing). Because this overlap between state and behavioral activities could potentially prevent a proper definition of classes (e.g., drinking while sitting could either be classified as drinking or sitting), two classification problems were considered, one considering exclusively the six state activities, the other the 55 behavioral activities.
All four subjects were asked to wear three different devices during the data acquisition process:
  • Google NEXUS 5X smartphone placed in a subject’s front left pocket, providing five different sensor modalities: three-axis accelerometer, gravity sensor, gyroscope, linear accelerometer (all sampled at 200 Hz) and magnetometer (50 Hz).
  • Microsoft Band 2 placed on a subject’s left arm, providing two different sensor modalities: three-axis accelerometer and gyroscope (67 Hz).
  • JINS MEME glasses placed on the subjects’ head, providing five different sensor modalities: three-axis accelerometer and gyroscope (20 Hz), blink speed, strength measurements, and eye-movement measurements (all discrete signals indicating an event).
All four subjects took part in two data acquisition sessions (#1 and #2) in which each of the 61 atomic activities was executed at least 20 times, each execution lasting 5 s. Because the smartwatch was placed on the left arm, the choice of the arm performing the behavioral atomic activities indicated with a * in Table 2 may impact the recognition performances. Two different datasets were therefore created for the behavioral classification problem: one gathering executions performed only with the left hand, the other gathering executions performed with either the left or the right hand. We refer to the former and the latter as Behavioral Left-Hand-Only (BLHO) and Behavioral Both-Hands (BBH) datasets, respectively. To build the training and testing sets, we followed two strategies: one splitting the data using a subject-dependent setup where data from the same subjects are included in the training and testing sets, the other using a subject-independent split where distinct subsets of subjects provide the data for the training and testing sets. The subject-dependent classification problem is simpler, while the subject-independent one is more representative of real use-cases. For the subject-dependent split, the data from session #1 were used as the training set, and those from session #2 as the testing set. The total number of executions of each dataset is summarised in Table 3. For the subject-independent setup, a leave-one-subject-out cross-validation was performed: the data from one subject in both sessions #1 and #2 were used as the testing set, and the data from the three other subjects as the training set. Each of the four subjects was used as the testing subject once. The number of executions per subject is provided in Table 4.
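The two splitting strategies can be sketched as follows, assuming per-segment subject and session identifiers are available as NumPy arrays (the array and function names are hypothetical):

```python
# Index-based sketch of the subject-dependent and subject-independent splits.
import numpy as np

def subject_dependent_split(sessions):
    """Session #1 provides the training indices, session #2 the testing indices."""
    return np.where(sessions == 1)[0], np.where(sessions == 2)[0]

def leave_one_subject_out(subjects):
    """Yield (train_idx, test_idx) pairs, each subject being the test subject once."""
    for test_subject in np.unique(subjects):
        test_idx = np.where(subjects == test_subject)[0]    # both sessions of one subject
        train_idx = np.where(subjects != test_subject)[0]   # the three remaining subjects
        yield train_idx, test_idx
```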

4.2. Experimental Setup

Because of their different nature (data characterised by spikes instead of continuous values), the blink speed, strength, and eye-movement signals of the JINS glasses were not used in this study. In addition, preliminary experiments using all devices showed that the smartphone magnetometer had little impact on the final classification performances. Our baseline study therefore used the smartphone accelerometer, gyroscope, gravity sensor, linear accelerometer, the data from the smartwatch accelerometer and gyroscope, and the data from the JINS glasses accelerometer and gyroscope.
The differences in sampling frequencies of those sensors affect the size of the 5-s segments, and the shape of the input of our DNN models. To take this into account, we define three different sDNNs processing data coming from the smartphone, smartwatch, and smartglasses, respectively. One sDNN is associated with all channels generated from one of the three devices, as shown in Figure 3. The outputs of all sDNNs are then concatenated and fed into fully-connected and softmax layers, as shown in Figure 3.
Because of data transmission problems, not all channels necessarily have a length of exactly 5 s. We therefore decided to use the first 4 s of each record. This leads to segments of shape $L_{sp} \times S_{sp} = 800 \times 12$, $L_{sw} \times S_{sw} = 267 \times 6$ and $L_{sg} \times S_{sg} = 80 \times 6$ for the smartphone, smartwatch, and JINS glasses, respectively. For our CNN-transfer approach, three sDNNs are trained separately for sensor modality classification on the source domain, each taking inputs of size $L_{sp}$, $L_{sw}$ and $L_{sg}$, respectively. The resulting mDNN comprises $S = 12 + 6 + 6 = 24$ branches. The weights of each sDNN are then transferred to the mDNN of one device (a sketch of this fusion architecture is given after the description of the compared approaches below). As indicated in Section 3, we use the OPPORTUNITY [41], gas-mixture [42], EEG-eye-state [43] and energy-appliance [44] datasets to build the source domain. For comparison, we report the performances of the following two approaches:
  • Train on Target Only (TTO): Baseline approach which only trains a mDNN on the target domain, without using transfer learning. The weights of the mDNN are initialised using a Glorot uniform initialisation [9].
  • Variational Autoencoder-Transfer (VAE-transfer): Approach which trains a sDNN on the source domain in an unsupervised way. The sDNN to be transferred is considered as the encoder part of a convolutional Variational Autoencoder (VAE) [45]. The encoder of a VAE learns the parameters of a Gaussian probability density characterising a compressed representation of the input in a lower-dimensional space called the embedding space. A sample is then drawn from this learned Gaussian distribution and sent as input to a decoder—a DNN whose structure mirrors the encoder—which is trained to reconstruct the encoder input on its output layer. The encoder–decoder ensemble is trained to reproduce the segments of the source domain as accurately as possible. The weights of the encoder are then transferred to a mDNN. For the CogAge dataset in particular, three VAEs taking inputs of size $L_{sp}$, $L_{sw}$ and $L_{sg}$, respectively, are trained and transferred.
We also tested a third approach which transfers weights from the source domain without performing any fine-tuning on the target domain. It, however, yielded performances significantly worse than all other methods. We thus decided not to report the results of this approach. The three aforementioned approaches tested on the CogAge dataset are summarised in Figure 4.
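As a complement to Figure 3 and Figure 4, the fusion architecture can be sketched as follows in Keras. The input shapes follow the values given above, while the internal layer types, sizes and output dimension are assumptions made only for illustration:

```python
# Sketch of the Figure 3 fusion architecture: three device-specific mDNNs
# whose outputs are concatenated before the final classification layers.
from tensorflow.keras import layers, models

def device_mdnn(length, n_channels, name):
    """One mDNN (Figure 2) for a single device, with one branch per channel."""
    inp = layers.Input(shape=(length, n_channels), name=name)
    x = layers.BatchNormalization()(inp)
    branch_outputs = []
    for s in range(n_channels):
        ch = layers.Lambda(lambda t, s=s: t[:, :, s:s + 1])(x)
        b = layers.Conv1D(32, 5, activation='relu', padding='same')(ch)
        b = layers.GlobalMaxPooling1D()(b)
        branch_outputs.append(b)
    return inp, layers.Concatenate()(branch_outputs)

sp_in, sp_feat = device_mdnn(800, 12, 'smartphone')   # L_sp x S_sp = 800 x 12
sw_in, sw_feat = device_mdnn(267, 6, 'smartwatch')    # L_sw x S_sw = 267 x 6
sg_in, sg_feat = device_mdnn(80, 6, 'smartglasses')   # L_sg x S_sg = 80 x 6

fused = layers.Concatenate()([sp_feat, sw_feat, sg_feat])
fused = layers.Dense(512, activation='relu')(fused)
out = layers.Dense(55, activation='softmax')(fused)   # e.g., 55 behavioral classes
cogage_model = models.Model([sp_in, sw_in, sg_in], out)
cogage_model.compile(optimizer='adadelta', loss='categorical_crossentropy')
```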
The hyper-parameters of the mDNN were firstly optimised by trial and error for TTO. Both CNN-transfer and VAE-transfer were then performed by re-using the same parameters. All DNN parameters are provided on our repository, whose link is given in the Supplementary Materials of this paper. For CNN-transfer, all sDNNs were trained for 25 epochs using the ADADELTA optimiser [46] with a categorical cross-entropy loss function. For VAE-transfer, the encoder–decoder ensemble was trained for 10 epochs using an ADADELTA optimiser. A Mean Square Error reconstruction term regularised by the Kullback–Leibler divergence between the Gaussian distribution learned by the encoder and the standard normal distribution was used as the loss function. In addition, 90% of the source data were used as the training set. The remaining 10% were used as a validation set to validate the sDNN parameters. In the case of TTO, the weights of the mDNN were initialised using the Glorot uniform initialisation. The mDNN was then fine-tuned for the three classification problems—state, BLHO and BBH—using the ADADELTA optimiser with a categorical cross-entropy loss function for 150 epochs. All models were coded using the Keras library (version 2.4.2) with TensorFlow backend (version 1.12), and trained on a 16 GB RAM machine with an Intel i7-7700K CPU and an Nvidia GTX 1080Ti GPU.
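The VAE pre-training used by VAE-transfer can be sketched as follows. The encoder/decoder layer sizes and the latent dimension are assumptions; only the loss structure (MSE reconstruction regularised by the KL divergence) follows the description above:

```python
# Sketch of a convolutional VAE whose encoder is later transferred to mDNN branches.
import tensorflow as tf
from tensorflow.keras import layers, models

L, latent_dim = 128, 16   # assumed segment length and latent size

class Sampling(layers.Layer):
    """Draw z ~ N(mu, sigma^2) and add the KL divergence to the model losses."""
    def call(self, inputs):
        mu, log_var = inputs
        kl = -0.5 * tf.reduce_mean(1.0 + log_var - tf.square(mu) - tf.exp(log_var))
        self.add_loss(kl)
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

# Encoder: the sDNN to be transferred.
enc_in = layers.Input(shape=(L, 1))
h = layers.Conv1D(32, 5, strides=2, activation='relu', padding='same')(enc_in)
h = layers.Flatten()(h)
mu, log_var = layers.Dense(latent_dim)(h), layers.Dense(latent_dim)(h)
z = Sampling()([mu, log_var])

# Decoder: mirrors the encoder and reconstructs the input segment.
d = layers.Dense((L // 2) * 32, activation='relu')(z)
d = layers.Reshape((L // 2, 32))(d)
dec_out = layers.Conv1DTranspose(1, 5, strides=2, padding='same')(d)

vae = models.Model(enc_in, dec_out)
# MSE reconstruction term; the KL regularisation is added inside the Sampling layer.
vae.compile(optimizer='adadelta', loss='mse')
# vae.fit(X_S_single_channel, X_S_single_channel, epochs=10)
# After pre-training, the encoder weights are transferred to the mDNN branches.
```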
The accuracy, the average F1-score (AF1), and the Mean Average Precision (MAP) were used as evaluation metrics. The MAP is based on the computation of class Average Precisions (APs). For each class, test examples are ordered by decreasing probabilities provided by the softmax layer of the mDNN. Precision is then computed at each position of an example of the class in the ordered list. Those precisions are averaged to compute the AP of the class, and the class APs are averaged to yield the MAP. Because of the potential overlap between state and behavioural activities, AP is a convenient metric for examining whether an execution was preferentially classified into the most relevant class or not.
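This MAP computation corresponds to the following sketch, assuming the softmax outputs and the true labels of the test set are available as NumPy arrays (function and argument names are hypothetical):

```python
# Per-class Average Precision from softmax scores, then the mean over classes.
import numpy as np

def mean_average_precision(scores, labels):
    """scores: (n_examples, n_classes) softmax outputs; labels: (n_examples,) true classes."""
    aps = []
    for c in range(scores.shape[1]):
        order = np.argsort(-scores[:, c])              # sort by decreasing class-c probability
        relevant = (labels[order] == c).astype(float)  # 1 where the example truly belongs to c
        if relevant.sum() == 0:
            continue
        precision_at_hits = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
        aps.append((precision_at_hits * relevant).sum() / relevant.sum())  # AP of class c
    return float(np.mean(aps))                         # MAP = mean of the class APs
```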

4.3. Results

The results of the three classification problems are provided in Table 5 for the subject-dependent configuration and Table 6 for the subject-independent one. The main observations for both setups can be highlighted as follows:
  • The results of the state classification problem are relatively uniform across our transfer and baseline approaches. We attribute this to two factors. Firstly, the state classification problem is significantly simpler than BBH or BLHO because it contains a low number of fairly distinct classes. Secondly, the testing set is fairly small, so a few misclassified examples result in a drop of a few percent in the evaluation metrics. With these factors in mind, it can be observed that CNN-transfer, TTO and VAE-transfer all return predictions on the testing set that differ only for a few examples.
  • For behavioural activity classification, VAE-transfer performs poorly overall, and ends up yielding results worse than both CNN-transfer and TTO.
  • Our transfer approach consistently yields better results than TTO for both BLHO and BBH classification problems. It can be noted that CNN-transfer provides better performances than TTO for all test subjects in the subject-independent configuration.
A detailed analysis of class APs for behavioural activities was carried out in the subject-dependent configuration to examine whether some transfer setups could benefit particular activities. Class AP plots are uploaded to our repository (link provided in the Supplementary Materials of this paper). We could observe the superiority of CNN-transfer by computing some global statistics on all activities. CNN-transfer yielded better class APs than TTO for 40/55 and 42/55 behavioral activities for BLHO and BBH, respectively. CNN-transfer obtained an 8.59% and 8.27% improvement in AP compared to TTO for the BLHO and BBH classification, respectively. For the other activities, CNN-transfer underperformed TTO by smaller margins, with an average AP gap of 3.07% and 3.54% for the BLHO and BBH classification, respectively. The overall results suggest that CNN-transfer allows for obtaining performance improvements compared to TTO, but it is difficult to designate activities which specifically benefit from the transfer. We could in particular check that activities similar to those contained in the OPPORTUNITY dataset (which is part of the source domain, and contains data about opening and closing doors/drawers) did not always yield better results compared to TTO.

5. Experiments for Wearable-Based Emotion Recognition

This section presents the results of our transfer learning approach for ER. The experiments are conducted on a popular benchmark dataset for wearable-based ER: DEAP [15].

5.1. Dataset Description

The DEAP dataset aggregates data from 32 subjects who watched 40 one-minute-long music videos selected to induce a wide range of emotions. During the experiments, each subject wore on his/her head sensor equipment yielding a total of 40 sensor channels ($S = 40$): 32 EEG channels and eight channels returning peripheral physiological signals (EOG, EMG, GSR, BVP, temperature, and respiration). The labelling was performed using the Circumplex model, which decomposes emotions along two main axes: arousal (level of excitement) and valence (level of pleasantness). After each visualisation, each subject was asked to rate his/her level of arousal and valence on a 9-point scale from 1 (very low) to 9 (very high). We used the pre-processed version of the dataset, with all 40 channels downsampled to a frequency of 128 Hz.

5.2. Experimental Setup

To evaluate data labelled using the Circumplex model, numerous studies defined emotion recognition either as a 2-class problem between low (<5) and high (⩾5) arousal/valence, or as a 3-class problem between low (<3), medium (⩾3 and <6), and high (⩾6). Sensor-based ER is still a relatively immature research topic due to its inherent difficulty caused by several factors, such as the challenge of obtaining properly labelled data, the high intra-class variability when using physiological signals, etc. As a result, a large part of the ER literature performed experiments in a subject-dependent context, while the few subject-independent studies could only report mediocre classification results [37,47]. We therefore decided to use the subject-dependent setup of [7], in which the authors trained a bi-modal autoencoder processing both EEG and other modalities and taking non-overlapping segments of 1 s ($L = 128$) as inputs for a 2-class classification problem for arousal and valence. The data segments from all 32 subjects of the DEAP dataset were mixed and evenly split into folds for a 10-fold cross-validation. In our experiments, we train two mDNNs with $S = 40$ branches, one for arousal and the other for valence classification. Both are evaluated using the classification accuracy as the evaluation metric.
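The data preparation described above can be sketched as follows; the array names, shapes and function signature are assumptions about how the pre-processed DEAP records might be stored:

```python
# Cut each record into non-overlapping 1-second segments (L = 128 samples at
# 128 Hz) and binarise the 9-point ratings into low (<5) / high (>=5).
import numpy as np

def prepare_deap(records, ratings, L=128):
    """records: (n_trials, n_channels, n_samples) pre-processed signals sampled
    at 128 Hz; ratings: (n_trials,) arousal or valence scores on the 9-point scale."""
    n_trials, n_channels, n_samples = records.shape
    segments, labels = [], []
    for trial in range(n_trials):
        label = int(ratings[trial] >= 5)                # 0 = low, 1 = high
        for start in range(0, n_samples - L + 1, L):    # non-overlapping 1-s windows
            segments.append(records[trial, :, start:start + L].T)  # shape (L, S)
            labels.append(label)
    return np.stack(segments), np.array(labels)
```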
Similarly to the CogAge dataset, we test on DEAP the performances of the TTO, VAE-transfer, and CNN-transfer approaches defined in Section 4.2. The parameters of the mDNN were firstly optimised for TTO by trial and error, and then re-used for both CNN-transfer and VAE-transfer. All DNN parameters are provided on our repository, whose link is given in the Supplementary Materials of this paper. For CNN-transfer and each of the arousal and valence classifications, an sDNN was trained for 100 epochs using the ADADELTA optimiser with a categorical cross-entropy loss function. For VAE-transfer, the same training setup as described in Section 4.2 for the CogAge dataset is used. In addition, 90% of the source data were used as the training set. The remaining 10% were used as a validation set to validate the sDNN parameters. The weights of the sDNN were transferred to construct a mDNN on the target domain. The rest of the weights of the mDNN were initialised using the Glorot uniform initialisation [9]. Two mDNNs—one for arousal and the other for valence—were then fine-tuned for each fold using the ADADELTA optimiser with a categorical cross-entropy loss function for 300 epochs.

5.3. Results

The results for arousal and valence classification on the DEAP dataset are summarised in Table 7 and Table 8, respectively. Similarly to the CogAge dataset, VAE-transfer was again outperformed by both CNN-transfer and TTO. CNN-transfer consistently outperformed TTO on all 10 folds for both arousal and valence classification problems. This validates the effectiveness of our transfer approach. In addition, our method yielded significantly better results than those obtained in [7] using bi-modal AEs.

6. Analysis

The experiments on both the CogAge and DEAP datasets showed that the best performances were obtained by our transfer method based on supervised pre-training using sensor modality labels. Our transfer approach is based on the assumption that it can help in cases where labelled training data on the target dataset are scarce. In order to further check this assumption, we carried out additional experiments with reduced amounts of training data on both the CogAge (subject-dependent configuration) and DEAP datasets. On both target datasets, we randomly downsampled the training set to 5, 25, 50, and 75% of its original size while keeping the testing set unchanged, and computed the classification performances on the same testing examples. Table 9, Table 10 and Table 11 show the results for State/BLHO/BBH classification on CogAge, arousal classification on DEAP, and valence classification on DEAP, respectively.
The main observation is that CNN-transfer keeps outperforming TTO at all levels of downsampling of the target training sets. CNN-transfer yields a consistent improvement over TTO for both the BLHO and BBH problems at all downsampling levels. Performances on the state classification remain relatively uniform between the three tested methods in most configurations due to the relative simplicity of the problem. Larger differences in performances can be observed in the case where the training set was downsampled the most (5%). In that configuration, CNN-transfer clearly outperforms the two other approaches, which indicates its effectiveness in configurations with few training examples. The same consistency can be observed on the DEAP dataset, as CNN-transfer also outperformed TTO on all 10 folds of the dataset. VAE-transfer remains outperformed by both TTO and CNN-transfer in most tested configurations.
To better understand the reasons behind the performance improvements of our transfer approach compared to the case without transfer, we performed a low-level analysis of the neurons of an mDNN to identify differences between TTO and CNN-transfer. On both CogAge (subject-dependent setup) and DEAP, TTO and CNN-transfer were respectively used to train two mDNNs with the same architecture (shown in Figure 3). For each neuron or layer of this architecture, we can compare metrics computed on the mDNN trained with TTO to those computed on the mDNN trained with CNN-transfer. This way, we can identify the neurons and layers with the biggest differences (or similarities) between TTO and CNN-transfer.
Given the two trained mDNNs, one trained using TTO and the other using CNN-transfer, we computed an importance score for each neuron which indicates the relevance of that neuron (and of the feature it encodes) to the target classification problem. For this, we used the Neuron Importance Score Propagation (NISP) [48] and Infinite Feature Selection (InfFS) [49] methods. NISP can be applied to any DNN involving fully-connected, convolutional, or pooling layers. It is a score backpropagation method which assumes that importance scores are available for the neurons of the penultimate layer (i.e., the last one before the softmax layer). Those scores can be obtained by any feature ranking approach. Similarly to [48], we chose the InfFS method to compute them, mainly because it has shown its effectiveness for DNN architectures involving convolutional layers. NISP then backpropagates the InfFS scores to the prior layers so that an importance score can be attributed to each neuron of the DNN. Let $n^{(k)}$ be the number of neurons of the $k$-th layer, $s_i^{(k)}$ be the importance score of the $i$-th neuron of the $k$-th layer ($1 \leq i \leq n^{(k)}$), and $w_{ij}^{(k)}$ be the weight connecting the $i$-th neuron of the $(k-1)$-th layer to the $j$-th neuron of the $k$-th layer ($1 \leq i \leq n^{(k-1)}$ and $1 \leq j \leq n^{(k)}$). Then, $s_i^{(k)}$ is computed as
$$ s_i^{(k)} = \sum_{j=1}^{n^{(k+1)}} \left| w_{ij}^{(k+1)} \right| \, s_j^{(k+1)} $$
In other words, the importance score of a neuron in a layer is the sum of the scores of all neurons of the next layer that it has a connection with, weighted by the absolute value of the neural weights. This formula can be used to backpropagate importance scores in fully-connected layers. How to apply it to convolutional and pooling layers can be found in the supplementary materials of [48].
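For fully-connected layers, this backpropagation rule can be written in a few lines. The NumPy sketch below is an illustration with toy shapes; the extension to convolutional and pooling layers, as well as the actual InfFS scores, are not included.

```python
# Minimal NumPy sketch of the NISP score backpropagation for a stack of
# fully-connected layers. Importance scores on the penultimate layer (here a random
# stand-in for the InfFS scores) are assumed to be given.
import numpy as np

def backpropagate_importance(weight_matrices, penultimate_scores):
    """weight_matrices[k] has shape (n_k, n_{k+1}); returns one score vector per layer."""
    scores = [np.asarray(penultimate_scores, dtype=float)]
    # Walk backwards: s_i^(k) = sum_j |w_ij^(k+1)| * s_j^(k+1)
    for W in reversed(weight_matrices):
        scores.insert(0, np.abs(W) @ scores[0])
    return scores

# Toy example: fully-connected layers with 8 -> 6 -> 4 neurons before the softmax.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 6)), rng.normal(size=(6, 4))]
inffs_scores = rng.random(4)             # stand-in for InfFS scores
layer_scores = backpropagate_importance(Ws, inffs_scores)
print([s.shape for s in layer_scores])   # [(8,), (6,), (4,)]
```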
The study was carried out for BBH classification on the CogAge dataset, as we expected that the largest performance gap between TTO and CNN-transfer would lead to the clearest differences between their neuron importance scores (we obtained similar results for BLHO classification on the CogAge dataset; the associated plots are provided in the repository linked in the Supplementary Materials). Using NISP and InfFS, we obtained vectors of neuron importance scores for all layers of both mDNNs trained using TTO and CNN-transfer. We refer to the vectors of importance scores of the $k$-th layer as $v_{TTO}^{(k)} = (s_{1,TTO}^{(k)}, \ldots, s_{n^{(k)},TTO}^{(k)}) \in \mathbb{R}^{n^{(k)}}$ and $v_{CT}^{(k)} = (s_{1,CT}^{(k)}, \ldots, s_{n^{(k)},CT}^{(k)}) \in \mathbb{R}^{n^{(k)}}$ for TTO and CNN-transfer, respectively. We then applied a min-max normalisation to each $v_{*}^{(k)} \in \mathbb{R}^{n^{(k)}}$ (with $* \in \{TTO, CT\}$) to obtain a normalised vector of scores $\tilde{v}_{*}^{(k)} \in \mathbb{R}^{n^{(k)}}$. Because NISP backpropagates only positive scores, the absolute values of the neuron importance scores in a layer increase as the layer gets closer to the input of the DNN. This normalisation was performed to allow comparison between scores of layers independently of their depth. For all layers, we finally computed the Euclidean distance between $\tilde{v}_{TTO}^{(k)}$ and $\tilde{v}_{CT}^{(k)}$ as
$$ D^{(k)} = \left\| \tilde{v}_{TTO}^{(k)} - \tilde{v}_{CT}^{(k)} \right\|_2 $$
which we refer to as the difference of the $k$-th layer. This allows us to determine which layers were the most similar or dissimilar after training using TTO and CNN-transfer. Figure 5 shows the layer differences $D^{(k)}$ arranged in decreasing order.
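The normalisation and distance computation can be summarised by the following sketch, where `scores_tto` and `scores_ct` stand for the per-layer NISP score vectors of the two mDNNs (hypothetical variable names used for illustration):

```python
# Sketch of the layer-wise comparison between TTO and CNN-transfer: min-max
# normalisation of the NISP scores per layer, then the Euclidean distance D(k).
import numpy as np

def minmax(v):
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def layer_differences(scores_tto, scores_ct):
    return [np.linalg.norm(minmax(a) - minmax(b))
            for a, b in zip(scores_tto, scores_ct)]

# differences = layer_differences(scores_tto, scores_ct)
# ranking = np.argsort(differences)[::-1]   # layers by decreasing D(k), as in Figure 5
```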
We can observe that, for some layers, the differences in neuron importance scores between TTO and CNN-transfer are substantial. Since each layer encodes specific features, this indicates differences in the features learned using either CNN-transfer or TTO. We analysed the features encoded by the layers with the highest score differences between TTO and CNN-transfer. As shown in Figure 3, each layer belonging to one branch processes data coming from a specific device, i.e., smartphone, smartwatch, or smartglasses. We therefore categorised layers depending on the device (as depicted by the colours in Figure 5). Layers not belonging to any branch (e.g., concatenation or fully-connected layers) were categorised as “other”. We observed that the layers with the highest differences encode features computed on data coming from the smartwatch.
Preliminary experiments carried out on the CogAge dataset showed that the smartwatch was the most important device for the classification of behavioural activities, which indicates that it also provides the most relevant features. We could confirm this by checking which channels of the input data were the most important for the classification of behavioural activities. For this, we computed the Jacobian matrix of the mDNN trained by TTO or CNN-transfer, following an approach similar to [50]. The mDNN estimates the predictive function $f_T : \mathbb{R}^{L \times S} \rightarrow \mathbb{R}^{C_T}$, where $L$ is the length of a multichannel segment $X$ belonging to the target dataset $\mathcal{X}_T$, $S$ is the number of channels of this segment, and $C_T$ is the number of classes. The multichannel segment $X = (x_{ls})_{l,s} \in \mathbb{R}^{L \times S}$ is an $L \times S$ matrix where each element $x_{ls}$ represents the value at the $l$-th time point ($1 \leq l \leq L$) of the $s$-th sensor channel ($1 \leq s \leq S$). In addition, $f_T$ associates $X$ with a vector of softmax probabilities for the $C_T$ classes, $f_T(X) = (f_{T,1}(X), \ldots, f_{T,c}(X), \ldots, f_{T,C_T}(X))$ ($1 \leq c \leq C_T$). Under this setting, a Jacobian value is defined as
$$ J_{c,l,s}(X) = \frac{\partial f_{T,c}(X)}{\partial x_{ls}} $$
It indicates how much a variation in $x_{ls}$ affects the softmax probability of the $c$-th class. $J_{c,l,s}(X)$ can therefore be used to determine which elements $x_{ls}$ of $X$ matter the most for the classification of $X$ into the $c$-th class: the smaller $|J_{c,l,s}(X)|$ is, the less impact variations in $x_{ls}$ have, and therefore the less important $x_{ls}$ is. In contrast, elements $x_{ls}$ associated with a higher $|J_{c,l,s}(X)|$ are more important.
We apply this reasoning channel-wise to $X$. In particular, we compute a channel-wise Jacobian score $\omega_s(X)$ for $X$ as the average of $|J_{c,l,s}(X)|$ over all $L$ time points and all $C_T$ classes, that is,
$$ \omega_s(X) = \frac{1}{C_T} \frac{1}{L} \sum_{c=1}^{C_T} \sum_{l=1}^{L} \left| J_{c,l,s}(X) \right| $$
$\omega_s(X)$ indicates the overall importance of the $s$-th channel for the classification of $X$. Finally, we compute a global channel-wise Jacobian score $\Omega_s$ by averaging $\omega_s(X)$ over all segments in $\mathcal{X}_T$:
$$ \Omega_s = \frac{1}{\mathrm{card}(\mathcal{X}_T)} \sum_{X \in \mathcal{X}_T} \omega_s(X) $$
A high value of $\Omega_s$ indicates a high importance of the $s$-th sensor channel for the classification problem.
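As an illustration, the sketch below shows how such channel-wise scores can be obtained with automatic differentiation, assuming the mDNN is a Keras model; this is not necessarily the exact way the Jacobian values were computed for Figure 6.

```python
# Sketch (TensorFlow, assumed implementation) of the channel-wise Jacobian scores:
# J is obtained by automatic differentiation, averaged over classes and time points
# (omega_s per segment), then over all segments (Omega_s).
import numpy as np
import tensorflow as tf

def channel_jacobian_scores(mdnn, segments, batch_size=32):
    """segments: float32 array of shape (N, L, S). Returns Omega of shape (S,)."""
    omegas = []
    for start in range(0, len(segments), batch_size):
        x = tf.convert_to_tensor(segments[start:start + batch_size])
        with tf.GradientTape() as tape:
            tape.watch(x)
            probs = mdnn(x)                       # shape (B, C_T)
        # batch_jacobian: d probs / d x, shape (B, C_T, L, S)
        jac = tape.batch_jacobian(probs, x)
        # omega_s(X): mean of |J| over classes and time points -> shape (B, S)
        omegas.append(tf.reduce_mean(tf.abs(jac), axis=[1, 2]).numpy())
    # Omega_s: average over all segments of the target set
    return np.concatenate(omegas, axis=0).mean(axis=0)

# Omega = channel_jacobian_scores(mdnn, x_target_test)   # one score per sensor channel
```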
Figure 6 shows the values of $\Omega_s$ obtained for both mDNNs trained by TTO and CNN-transfer for the $S = 24$ sensor channels for BBH classification on the CogAge dataset. It can be observed that the scores obtained for CNN-transfer and TTO do not differ substantially. For both of them, some input sensor channels, such as the smartphone gyroscope ($k \in \{7, 8, 9\}$ in Figure 6) and linear acceleration ($k \in \{10, 11, 12\}$) and all smartwatch modalities ($k \in \{13, \ldots, 18\}$), contribute more to the target task than the others, in particular more than the accelerometer and gyroscope of the smartglasses ($k \in \{19, \ldots, 24\}$). The highest Jacobian scores are obtained for the channels of the smartwatch, which matches the observations from our preliminary experiments on the CogAge dataset.
The NISP+InfFS and Jacobian experiments for behavioural activity classification on the CogAge dataset showed that the layers of the mDNN processing the most useful sensor channels (Figure 6) also had the largest differences in importance scores between TTO and CNN-transfer (Figure 5). The largest score differences were found in layers processing smartwatch data, the device providing the most important data for BBH and BLHO classification for both TTO and CNN-transfer. This indicates that the transferred features on the most important channels were successfully fine-tuned into more discriminative features, without causing a loss of information in the other channels. Our future work will focus on confirming whether this phenomenon also occurs for different target domains by carrying out the same experiments on other target datasets.

7. Conclusions

In this paper, we proposed a deep transfer learning approach which can be applied to a wide variety of classification problems using non-sparse time-series data. It is based on the idea of building a source dataset containing as many different sensor modalities as possible: existing time-series datasets used for various applications are aggregated, segmented, and labelled with their corresponding sensor modality. The source dataset is then used to train a sDNN that encodes general time-series features to perform sensor modality classification. A mDNN is then constructed by replicating the sDNN for each of the sensor channels of the target domain and fine-tuning it on the target data. The architecture of the mDNN allows for handling different target domains regardless of their number of sensor channels. Our approach was tested against two baselines (TTO and VAE-transfer) on two very different target domains: wearable-based HAR and ER. For wearable-based HAR, we also introduced the Cognitive Village-MSBand dataset, a new benchmark dataset for this field.
The results showed that our transfer approach yields the best performances on both tested datasets. This indicates that our method is robust to variations in the type and format of the target data. It is also robust to variations in the quantity of training data on the target domain, since it outperformed the baselines for different amounts of training data on both the CogAge and DEAP datasets. We believe that our method could let researchers bypass the issue of target data scarcity by leveraging existing time-series datasets. Furthermore, our classification experiments on the CogAge and DEAP datasets showed that information relevant to the target problem can be extracted even from source datasets that are completely unrelated to it. Although further experiments would be needed to confirm whether such results can be reproduced on other target domains, we foresee that our approach could be useful for ubiquitous computing applications, where acquiring large quantities of labelled data for a specific problem is difficult, but a high number of datasets for various applications is available.
Despite the extensive experiments carried out in this paper, the following two points need further investigation. The first one is to expand the scope of our studies by analysing the impact of adding, removing, or picking specific sensor modalities and datasets from the source domain. This could give a better assessment of the robustness of our approach. Following an approach similar to [11], we plan to check the influence of the amount of source data and of the number and granularity of the classes in the source domain. Additionally, adding different types of time-series to the source domain (e.g., sparse time-series, event-based data from Lifelogging datasets [51], etc.) could be useful to check whether our approach can also work for target applications that do not use non-sparse time-series data. Finally, testing its performances on additional target domains could verify its generality at a larger scale, and will be performed in future work.
The second point is to provide a further interpretation of the features transferred from the source to the target domain, and of why they allow classification models to perform better than training without transfer on the target domain. Some initial insights have been provided in this paper by computing neuron importance scores (relative to the target classification problem) using the NISP and InfFS approaches, and by computing the mDNN Jacobian matrix on the CogAge dataset. Those experiments showed that our transfer approach obtains different and more relevant features than the ones obtained by TTO, by re-adapting the transferred features during the fine-tuning phase. However, our analysis remained at a general level by comparing mDNNs trained by our approach and TTO in a layer-wise fashion. In future work, we will examine further how importance scores differ within each layer. In particular, importance scores and their distribution among the neurons of each layer will be analysed to identify which of the features learned on the source domain were the most useful for the target domain.

Supplementary Materials

The datasets and codes used for the studies presented in this paper are available online at https://www.info.kindai.ac.jp/~shirahama/transfer/.

Author Contributions

F.L. initiated the research idea, wrote the code to carry out the experiments, and drafted the manuscript. K.S. provided guidance during the development of the approach, advice regarding the direction of the experiments, and helped with the manuscript revisions and proofreading. M.A.N. and X.H. both provided their assistance for the acquisition of the CogAge dataset, in particular with the development of the data acquisition applications. M.G. led the overall research activity, and iteratively contributed to the scientific concept of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

Research and development activities leading to this article have been supported by the German Research Foundation (DFG) as part of the research training group GRK 1564 “Imaging New Modalities”, and the German Federal Ministry of Education and Research (BMBF) within the projects CognitiveVillage (Grant No. 16SV7223K) and ELISE (Grant No. 16SV7512, www.elise-lernen.de).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, J.; Chen, Y.; Hao, H.; Hu, L.; Peng, X. Deep Learning for Sensor-based Activity Recognition: A Survey. Pattern Recognit. Lett. 2018, 119, 3–11. [Google Scholar] [CrossRef] [Green Version]
  2. Li, F.; Shirahama, K.; Nisar, A.M.; Köping, L.; Grzegorzek, M. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors. Sensors 2018, 18, 679. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833. [Google Scholar]
  4. Ordonez, F.J.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, Convolutional and Recurrent Models for Human Activity Recognition using Wearables. arXiv 2016, arXiv:1604.08880. [Google Scholar]
  6. Li, X.; Zhang, P.; Song, D.; Yu, G.; Hou, Y.; Hu, B. EEG-based Emotion Identification Using Unsupervised Deep Feature Learning. In Proceedings of the NeuroIR, Santiago, Chile, 13 August 2015. [Google Scholar]
  7. Liu, W.; Zheng, W.L.; Lu, B.L. Multimodal Emotion Recognition Using Multimodal Deep-Learning. In Proceedings of the SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, Santiago, Chile, 13 August 2015; pp. 521–529. [Google Scholar]
  8. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  9. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Chia Laguna Resort, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  10. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable are Features in Deep Neural Networks? In Proceedings of the NIPS, Montréal, QC, Canada, 8–13 December 2014; Volume 27, pp. 3320–3328. [Google Scholar]
  11. Huh, M.; Agrawal, P.; Efros, A.A. What Makes ImageNet Good for Transfer Learning? In Proceedings of the CVPR, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  12. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep Learning for Time Series Classification: A Review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef] [Green Version]
  13. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Transfer Learning for Time-Series Classification. In Proceedings of the IEEE International Conference on Big Data, Seattle, WA, USA, 10–13 December 2018. [Google Scholar]
  14. Malhotra, P.; TV, V.; Vig, L.; Agarwal, P.; Shroff, G. TimeNet: Pre-trained deep recurrent neural networks for time series classification. In Proceedings of the ESANN, Bruges, Belgium, 26–28 April 2017. [Google Scholar]
  15. Koelstra, S.; Mühl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the NIPS, Lake Tahoe, CA, USA, 3–8 December 2012; pp. 1097–1105. [Google Scholar]
  17. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the ICLR, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  18. He, K.; Zhang, X.; Ren, S.; Sen, J. Deep Residual Learning for Image Recognition. In Proceedings of the CVPR, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  19. Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. In Proceedings of the ICML, Beijing, China, 22–24 June 2014. [Google Scholar]
  20. Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. In Proceedings of the ICLR, Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  21. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the CVPR, Columbus, OH, USA, 24–27 June 2014; pp. 580–587. [Google Scholar]
  22. Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks. In Proceedings of the CVPR, Columbus, OH, USA, 24–27 June 2014; pp. 1717–1724. [Google Scholar]
  23. Afridi, M.J.; Ross, A.; Shapiro, E.M. On Automated Source Selection for Transfer Learning in Convolutional Neural Networks. Pattern Recognit. 2018, 73, 65–75. [Google Scholar] [CrossRef]
  24. Guo, Y.; Shi, H.; Kumar, A.; Grauman, K.; Rosing, T.; Feris, R. SpotTune: Transfer Learning Through Adaptive Fine-Tuning. In Proceedings of the CVPR, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  25. Li, X.; Grandvalet, Y.; Davoine, F. A Baseline Regularisation Scheme for Transfer Learning with Convolutional Neural Networks. Pattern Recognit. 2019, 98, 107049. [Google Scholar] [CrossRef]
  26. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Efros, A.A. Context Encoders: Feature Learning by Inpainting. In Proceedings of the CVPR, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  27. Cook, D.; Feuz, K.D.; Krishnan, N.C. Transfer Learning for Activity Recognition: A Survey. Knowl. Inf. Syst. 2013, 36, 537–556. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Morales, F.J.O.; Roggen, D. Deep Convolutional Feature Transfer Across Mobile Activity Recognition Domains, Sensor Modalities and Locations. In Proceedings of the ISWC, Kobe, Japan, 17–21 October 2016; pp. 92–99. [Google Scholar]
  29. Khan, A.H.; Roy, N.; Misra, A. Scaling Human Activity Recognition via Deep Learning-based Domain Adaptation. In Proceedings of the PERCOM, Athens, Greece, 19–23 March 2018. [Google Scholar]
  30. Wang, J.; Chen, Y.; Hu, L.; Peng, X.; Yu, P.S. Stratified Transfer Learning for Cross-Domain Activity Recognition. In Proceedings of the PERCOM, Athens, Greece, 19–23 March 2018. [Google Scholar]
  31. Chen, Y.; Keogh, E.; Hu, B.; Begum, N.; Bagnall, A.; Mueen, A.; Batista, G. The UCR Time Series Classification Archive. 2015. Available online: www.cs.ucr.edu/~eamonn/time_series_data/ (accessed on 30 July 2020).
  32. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
  33. Picard, R.W. Affective Computing; The MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  34. Cowie, R.; Douglas-Cowie, E.; Tsapatsoulis, N.; Votsis, G.; Kollias, S.; Fellenz, W.; Taylor, J.G. Emotion Recognition in Human-Computer Interaction. IEEE Signal Process. Mag. 2001, 18, 32–80. [Google Scholar] [CrossRef]
  35. Grünewald, A.; Krönert, D.; Li, F.; Kampling, H.; Pöhler, J.; Brück, R.; Littau, J.; Schnieber, K.; Piet, A.; Grzegorzek, M.; et al. Biomedical Data Acquisition and Processing to Recognize Emotions for Affective Learning. In Proceedings of the IEEE International Conference on Bioinformatics and Bioengineering, Taichung, Taiwan, 29–31 October 2018. [Google Scholar]
  36. Rozgic, V.; Vitaladevni, S.V.; Prasad, R. Robust EEG emotion classification using segment level decision fusion. In Proceedings of the ICASSP, Vancouver, BC, Canada, 26–30 May 2013; pp. 1286–1290. [Google Scholar]
  37. Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-based Emotion Recognition using Deep Learning Network with Principal Component Based Covariate Shift Adaptation. Sci. World J. 2014, 14, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Ma, J.; Tang, H.; Zheng, W.L.; Lu, B.L. Emotion Recognition using Multimodal Residual LSTM Network. In Proceedings of the ACMMM, Nice, France, 21–25 October 2019; pp. 176–183. [Google Scholar]
  39. Smirnov, D.; Nguifo, E.M. Time-series Classification with Recurrent Neural Networks. In Proceedings of the ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data, Dublin, Ireland, 10–14 September 2018. [Google Scholar]
  40. Dheeru, D.; Taniskidou, E.K. UCI Machine Learning Repository. 2017. Available online: http://archive.ics.uci.edu/ml (accessed on 30 July 2020).
  41. Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.; Tröster, G.; Millán, J.D.R.; Roggen, D. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042. [Google Scholar] [CrossRef] [Green Version]
  42. Fonollosa, J.; Sheik, S.; Huerta, R.; Marco, S. Reservoir computing compensates slow response of chemosensor arrays exposed to fast varying gas concentrations in continuous monitoring. Sens. Actuators B Chem. 2015, 215, 618–629. [Google Scholar] [CrossRef]
  43. Roesler, O. The EEG Eye State Dataset. 2013. Available online: https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State# (accessed on 30 July 2020).
  44. Candanedo, L.M.; Feldheim, V.; Deramaix, D. Data driven prediction models of energy use of appliances in a low-energy house. Energy Build. 2017, 140, 81–97. [Google Scholar] [CrossRef]
  45. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. In Proceedings of the ICLR, Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  46. Zeiler, M.D. ADADELTA: An Adaptive Learning Rate Method. arXiv 2012, arXiv:1212.5701. [Google Scholar]
  47. Li, X.; Song, D.; Zheng, P.; Zheng, Y.; Hou, Y.; Hu, B. Exploring EEG Features in Cross-subject Emotion Recognition. Front. Neurosci. 2019, 12, 162. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Yu, R.; Li, A.; Chen, C.-F.; Lai, J.-H.; Morariu, V.I.; Han, X.; Gao, M.; Lin, C.-Y.; Davis, L.S. NISP: Pruning Networks using Neuron Importance Score Propagation. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9194–9203. [Google Scholar]
  49. Roffo, G.; Melzi, S.; Cristani, M. Infinite Feature Selection. In Proceedings of the ICCV, Araucano Park, Chile, 11–18 December 2015; pp. 4202–4210. [Google Scholar]
  50. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In Proceedings of the ICLR Workshop, Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  51. Gurrin, C.; Smeaton, A.F.; Doherty, A.R. LifeLogging: Personal Big Data. Found. Trends Inf. Retr. 2014, 8, 1–107. [Google Scholar] [CrossRef]
Figure 1. An overview of our transfer learning method. (a) A labelled source dataset of single-channel sequences $(\mathcal{X}_S, \mathcal{Y}_S)$ is created by collecting segments $x_i^{(j)}$ of length $L$ from $M$ datasets and attributing them sensor modality labels $y_i^{(j)}$. $(\mathcal{X}_S, \mathcal{Y}_S)$ is then used to train a sDNN that predicts the sensor modality of each segment. (b) A mDNN is built to learn the predictive target function $f_T$. The weights of the trained sDNN are transferred to the mDNN. The latter is then fine-tuned on the target domain using $(\mathcal{X}_T, \mathcal{Y}_T)$.
Figure 2. mDNN used for the learning of $f_T$ on the target domain. The input segments of the target dataset $\mathcal{X}_T$ are sent through a batch normalisation layer. All sensor channels are then separated and processed by $S$ branches with the same number and type of hidden layers as the sDNN trained on the source dataset $(\mathcal{X}_S, \mathcal{Y}_S)$. The outputs of the $S$ branches are concatenated and sent through fully-connected and softmax layers for classification. The mDNN is fine-tuned using the target dataset $(\mathcal{X}_T, \mathcal{Y}_T)$.
Figure 3. Model used on the CogAge dataset. Each of the three mDNNs processes the smartphone (sp), smartwatch (sw) or smartglasses (sg) data. $L_*$ and $S_*$ with $* \in \{sp, sw, sg\}$ refer to the segment length and number of sensor channels, respectively. Outputs from the three mDNNs are concatenated and fed into fully-connected and softmax layers.
Figure 4. Flowchart of the three approaches tested on the CogAge dataset: TTO (no transfer), VAE-transfer, and CNN-transfer. The mDNN follows the architecture described in Figure 3.
Figure 5. Layer differences $D^{(k)}$ for all layers of mDNNs trained using TTO and CNN-transfer for BBH classification on the CogAge dataset. Each bar corresponds to a layer and represents its difference between TTO and CNN-transfer. Layer differences are arranged in decreasing order. For each of them, we indicate if it was computed for a layer belonging to a branch processing smartphone, smartwatch, or smartglasses data. Layers not belonging to any branch (e.g., concatenation or fully-connected layers) are categorised as “other”.
Figure 6. Global channel-wise Jacobian scores $\Omega_k$ for mDNNs trained by TTO (red) and CNN-transfer (blue). These scores are computed for BBH on the testing set of the CogAge dataset. sp, sw and sg refer to smartphone, smartwatch, and smartglasses, respectively.
Table 1. List of sensor modalities in the source domain using the OPPORTUNITY, gas-mixture, EEG-Eye-State and energy-appliance datasets (obtained from the documentation of each respective dataset). The respective units of measurement are provided in parentheses when the information was available.

Source Dataset | Sensor Modalities
OPPORTUNITY | Acceleration (in milli g) · IMU EU (in degree) · IMU magnetometer · IMU angular velocity (in mm·s⁻¹) · IMU gyroscope · IMU compass (in degree) · IMU acceleration (normalised value in milli g)
gas-mixture | Gas concentration (in ppm) · Conductance (in kΩ⁻¹)
EEG-eye-state | EEG
energy-appliance | Energy use (in W·h⁻¹) · Pressure (in mmHg) · Temperature (in °C) · Wind speed (in m·s⁻¹) · Humidity (in %) · Visibility (in km)
Table 2. List of the state and behavioral activities of the Cognitive Village dataset. For activities with a * symbol, executions with either the left or right hand were distinguished.

State activities: Standing · Sitting · Lying · Squatting · Walking · Bending

Behavioral activities: Sit down · Stand up · Lie down · Get up · Squat down · Stand up from squatting · Open door * · Close door * · Open drawer * · Close drawer * · Open small box * · Close small box * · Open big box · Close big box · Open lid by rotation * · Close lid by rotation * · Open other lid * · Close other lid * · Open bag · Take from floor * · Put on floor * · Bring · Put on high position * · Take from high position * · Take out * · Eat small thing * · Drink * · Scoop and put * · Plug in * · Unplug * · Rotate * · Throw out * · Hang · Unhang · Wear jacket · Take off jacket · Read · Write * · Type * · Talk using telephone * · Touch smartphone screen * · Open tap water * · Close tap water * · Put from tap water * · Put from bottle * · Throw out water * · Gargle · Rub hands · Dry off hands by shake · Dry off hands · Press from top * · Press by grasp * · Press switch/button * · Clean surface * · Clean floor
Table 3. Number of 5-second executions for each subset of the CogAge dataset. Executions of session #1 and #2 were respectively used to build the training and testing sets in the subject-dependent setup.

Dataset | Session #1 | Session #2
State | 260 | 275
BLHO | 1692 | 1705
BBH | 2284 | 2288
Table 4. Number of 5-second executions for each subject of the CogAge dataset.

Dataset | Subject #1 | Subject #2 | Subject #3 | Subject #4
State | 165 | 120 | 120 | 130
BLHO | 986 | 872 | 718 | 821
BBH | 1297 | 1096 | 1078 | 1101
Table 5. Accuracies, Average F1-Scores, and MAPs (in %) obtained by TTO, VAE-transfer, and CNN-transfer for the state, BLHO, and BBH classification problems in the subject-dependent configuration. The best performances for each classification problem and evaluation metric are highlighted in bold.

Transfer Approach | State Acc. | State AF1 | State MAP | BLHO Acc. | BLHO AF1 | BLHO MAP | BBH Acc. | BBH AF1 | BBH MAP
TTO | 95.91 | 95.94 | 97.63 | 71.95 | 71.72 | 75.03 | 67.94 | 67.65 | 72.00
VAE-transfer | 94.78 | 94.77 | 97.93 | 64.44 | 64.09 | 67.37 | 61.31 | 61.04 | 65.18
CNN-transfer | 95.94 | 95.94 | 97.62 | 76.44 | 76.07 | 79.09 | 71.85 | 71.41 | 75.14
Table 6. Accuracies, Average F1-Scores, and MAPs (in %) obtained by TTO, VAE-transfer, and CNN-transfer for the state, BLHO, and BBH classification problems in the subject-independent configuration (leave-one-subject-out cross-validation). The best average performances for each classification problem and evaluation metric are highlighted in bold.

Transfer Approach | Fold Index | State Acc. | State AF1 | State MAP | BLHO Acc. | BLHO AF1 | BLHO MAP | BBH Acc. | BBH AF1 | BBH MAP
TTO | 1 | 87.27 | 86.95 | 93.73 | 33.81 | 32.23 | 31.87 | 29.77 | 26.52 | 29.96
TTO | 2 | 91.67 | 91.57 | 95.19 | 52.47 | 50.37 | 52.32 | 46.45 | 44.01 | 46.24
TTO | 3 | 95.99 | 94.16 | 95.99 | 34.61 | 30.51 | 33.55 | 30.58 | 28.74 | 32.93
TTO | 4 | 90.66 | 90.82 | 97.15 | 55.52 | 52.01 | 56.96 | 47.69 | 44.27 | 47.71
TTO | Average | 91.40 | 90.88 | 95.52 | 44.10 | 41.28 | 43.68 | 38.62 | 35.89 | 39.21
TTO | Standard deviation | 3.59 | 2.98 | 1.44 | 11.50 | 11.48 | 12.82 | 9.77 | 9.58 | 9.07
VAE-transfer | 1 | 83.58 | 83.13 | 88.13 | 35.02 | 31.62 | 30.35 | 31.74 | 28.61 | 28.02
VAE-transfer | 2 | 90.00 | 89.66 | 93.26 | 51.58 | 47.99 | 52.65 | 43.48 | 42.09 | 43.89
VAE-transfer | 3 | 95.00 | 94.95 | 98.59 | 35.18 | 32.06 | 36.84 | 29.01 | 28.54 | 31.51
VAE-transfer | 4 | 82.58 | 76.35 | 93.79 | 54.43 | 50.03 | 52.54 | 48.23 | 43.61 | 46.38
VAE-transfer | Average | 87.79 | 86.02 | 93.45 | 44.05 | 40.43 | 43.10 | 38.12 | 35.71 | 37.45
VAE-transfer | Standard deviation | 5.82 | 8.06 | 4.28 | 10.40 | 9.95 | 11.29 | 9.21 | 8.27 | 9.04
CNN-transfer | 1 | 88.21 | 87.56 | 92.40 | 36.33 | 33.99 | 35.39 | 33.57 | 30.05 | 31.07
CNN-transfer | 2 | 84.17 | 83.89 | 92.58 | 52.98 | 49.34 | 53.92 | 47.23 | 45.33 | 51.15
CNN-transfer | 3 | 98.33 | 98.33 | 98.01 | 38.28 | 33.66 | 40.66 | 32.94 | 32.52 | 36.03
CNN-transfer | 4 | 91.52 | 91.56 | 96.60 | 58.85 | 55.40 | 62.04 | 50.19 | 46.60 | 53.86
CNN-transfer | Average | 90.56 | 90.34 | 94.90 | 49.39 | 43.10 | 48.00 | 40.98 | 38.63 | 43.03
CNN-transfer | Standard deviation | 5.99 | 6.18 | 2.84 | 11.68 | 10.99 | 12.18 | 9.01 | 8.55 | 11.18
Table 7. 10-fold cross validation accuracies (in %) for the classification of AROUSAL using a multichannel DNN on the DEAP dataset. $F_i$ refers to fold number $i$. The best performance for each fold is highlighted in bold.

Approach | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | Average
Bi-modal AE [7] | - | - | - | - | - | - | - | - | - | - | 80.50
TTO | 89.26 | 88.13 | 87.55 | 87.69 | 88.13 | 88.05 | 88.64 | 88.75 | 87.59 | 87.91 | 88.17
VAE-transfer | 83.22 | 84.87 | 84.93 | 85.04 | 83.74 | 84.86 | 84.71 | 84.55 | 85.12 | 83.93 | 84.50
CNN-transfer | 90.89 | 91.60 | 91.18 | 91.46 | 91.37 | 91.53 | 90.79 | 91.59 | 91.64 | 90.80 | 91.29
Table 8. 10-fold cross validation accuracies (in %) for the classification of VALENCE using a multichannel DNN on the DEAP dataset. $F_i$ refers to fold number $i$. The best performance for each fold is highlighted in bold.

Approach | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | Average
Bi-modal AE [7] | - | - | - | - | - | - | - | - | - | - | 85.20
TTO | 87.67 | 87.03 | 87.85 | 86.93 | 87.26 | 87.44 | 88.03 | 87.08 | 87.75 | 87.25 | 87.43
VAE-transfer | 85.17 | 84.86 | 83.92 | 85.48 | 84.75 | 84.06 | 84.23 | 85.42 | 84.90 | 84.69 | 84.75
CNN-transfer | 90.89 | 91.12 | 90.22 | 90.39 | 90.51 | 90.27 | 90.71 | 90.39 | 91.08 | 90.84 | 90.64
Table 9. Accuracies, AF1s, and MAPs (in %) of TTO, VAE-transfer, and CNN-transfer after downsampling of the training set, for the classification of state, BLHO, and BBH activities on the CogAge dataset (subject-dependent configuration).

Transfer Approach | Training Target Data Proportion (%) | State Acc. | State AF1 | State MAP | BLHO Acc. | BLHO AF1 | BLHO MAP | BBH Acc. | BBH AF1 | BBH MAP
TTO | 5 | 66.56 | 61.51 | 68.70 | 19.05 | 16.24 | 16.62 | 19.86 | 15.31 | 15.82
TTO | 25 | 88.78 | 88.92 | 94.16 | 51.98 | 51.23 | 50.90 | 50.83 | 50.21 | 52.67
TTO | 50 | 93.78 | 93.67 | 96.88 | 60.89 | 60.74 | 62.07 | 58.90 | 58.39 | 61.99
TTO | 75 | 95.14 | 95.18 | 97.73 | 71.11 | 71.24 | 73.18 | 62.78 | 62.70 | 65.99
TTO | 100 | 95.91 | 95.94 | 97.63 | 71.95 | 71.72 | 75.03 | 67.94 | 67.65 | 72.00
VAE-transfer | 5 | 59.79 | 57.44 | 64.55 | 19.05 | 14.88 | 17.43 | 15.11 | 13.03 | 13.49
VAE-transfer | 25 | 89.14 | 88.92 | 91.42 | 39.20 | 37.34 | 38.53 | 32.92 | 32.46 | 33.00
VAE-transfer | 50 | 89.09 | 89.05 | 95.69 | 51.91 | 51.52 | 52.79 | 47.52 | 47.36 | 49.18
VAE-transfer | 75 | 94.84 | 94.81 | 97.91 | 61.45 | 60.58 | 62.35 | 51.29 | 50.60 | 53.90
VAE-transfer | 100 | 94.78 | 94.77 | 97.93 | 64.44 | 64.09 | 67.37 | 61.31 | 61.04 | 65.18
CNN-transfer | 5 | 67.64 | 67.70 | 73.74 | 23.39 | 18.10 | 20.41 | 25.92 | 22.05 | 24.13
CNN-transfer | 25 | 90.47 | 90.61 | 94.12 | 55.89 | 55.46 | 56.97 | 56.95 | 55.65 | 57.51
CNN-transfer | 50 | 94.60 | 94.47 | 97.73 | 66.66 | 66.01 | 68.59 | 62.83 | 62.22 | 66.22
CNN-transfer | 75 | 95.88 | 95.87 | 97.11 | 73.06 | 72.47 | 74.99 | 66.29 | 66.29 | 69.95
CNN-transfer | 100 | 95.94 | 95.94 | 97.62 | 76.44 | 76.07 | 79.09 | 71.85 | 71.41 | 75.14
Table 10. 10-fold cross validation accuracies (in %) of TTO, VAE-transfer, and CNN-transfer after downsampling of the training set for the classification of AROUSAL on the DEAP dataset. $F_i$ refers to fold number $i$.

Transfer Approach | Training Target Data Proportion (%) | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | Average
TTO | 5 | 64.11 | 64.81 | 64.00 | 63.64 | 63.81 | 65.49 | 65.74 | 67.44 | 64.62 | 64.61 | 64.83
TTO | 25 | 78.47 | 76.28 | 76.08 | 76.66 | 76.75 | 77.53 | 74.91 | 76.92 | 77.80 | 76.23 | 76.76
TTO | 50 | 82.08 | 81.56 | 82.29 | 84.34 | 82.03 | 83.94 | 82.68 | 82.42 | 82.90 | 83.16 | 82.74
TTO | 75 | 85.88 | 87.49 | 87.25 | 85.16 | 87.05 | 85.13 | 85.59 | 86.22 | 86.86 | 85.88 | 86.25
TTO | 100 | 89.26 | 88.13 | 87.55 | 87.69 | 88.13 | 88.05 | 88.64 | 88.75 | 87.59 | 87.91 | 88.17
VAE-transfer | 5 | 63.52 | 64.26 | 64.20 | 65.80 | 64.83 | 64.24 | 64.42 | 64.91 | 64.04 | 63.70 | 63.70
VAE-transfer | 25 | 74.12 | 74.79 | 74.59 | 74.78 | 74.45 | 74.63 | 73.76 | 74.49 | 74.09 | 73.67 | 74.34
VAE-transfer | 50 | 78.99 | 79.92 | 80.31 | 80.05 | 79.50 | 80.25 | 79.49 | 79.34 | 79.77 | 79.12 | 79.67
VAE-transfer | 75 | 82.14 | 83.07 | 82.56 | 83.12 | 82.54 | 82.13 | 82.48 | 82.69 | 82.19 | 82.25 | 82.52
VAE-transfer | 100 | 83.22 | 84.87 | 84.93 | 85.04 | 83.74 | 84.86 | 84.71 | 84.55 | 85.12 | 83.93 | 84.50
CNN-transfer | 5 | 65.66 | 65.58 | 65.91 | 66.64 | 65.71 | 65.45 | 66.94 | 65.31 | 66.90 | 66.48 | 66.06
CNN-transfer | 25 | 79.86 | 80.47 | 79.97 | 80.34 | 79.59 | 78.91 | 79.58 | 80.05 | 80.39 | 80.18 | 79.93
CNN-transfer | 50 | 86.79 | 86.95 | 86.32 | 86.77 | 86.48 | 87.54 | 86.41 | 86.58 | 86.78 | 86.76 | 86.74
CNN-transfer | 75 | 89.37 | 89.66 | 89.46 | 90.60 | 89.72 | 89.74 | 89.12 | 89.55 | 89.66 | 88.81 | 89.57
CNN-transfer | 100 | 90.89 | 91.60 | 91.18 | 91.46 | 91.37 | 91.53 | 90.79 | 91.59 | 91.64 | 90.80 | 91.29
Table 11. 10-fold cross validation accuracies (in %) of TTO, VAE-transfer, and CNN-transfer after downsampling of the training set for the classification of VALENCE on the DEAP dataset. $F_i$ refers to fold number $i$.

Transfer Approach | Training Target Data Proportion (%) | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | Average
TTO | 5 | 64.27 | 64.36 | 64.29 | 62.71 | 63.86 | 63.50 | 63.08 | 65.17 | 63.76 | 62.79 | 63.78
TTO | 25 | 77.78 | 75.60 | 76.33 | 76.23 | 76.26 | 74.88 | 75.93 | 76.12 | 76.44 | 76.07 | 76.16
TTO | 50 | 82.49 | 81.78 | 81.89 | 81.88 | 81.63 | 82.61 | 81.36 | 81.87 | 82.76 | 83.21 | 82.15
TTO | 75 | 85.42 | 85.97 | 84.76 | 85.64 | 85.21 | 85.62 | 84.92 | 86.21 | 85.13 | 86.04 | 85.49
TTO | 100 | 87.67 | 87.03 | 87.85 | 86.93 | 87.26 | 87.44 | 88.03 | 87.08 | 87.75 | 87.25 | 87.43
VAE-transfer | 5 | 64.11 | 64.43 | 63.29 | 65.05 | 64.55 | 65.22 | 64.14 | 65.35 | 64.72 | 65.38 | 64.62
VAE-transfer | 25 | 74.67 | 73.57 | 74.31 | 74.70 | 73.26 | 74.32 | 74.52 | 74.02 | 74.48 | 74.18 | 74.20
VAE-transfer | 50 | 80.41 | 80.11 | 79.94 | 80.49 | 80.13 | 80.11 | 80.12 | 80.53 | 81.05 | 79.33 | 80.22
VAE-transfer | 75 | 83.79 | 82.68 | 82.73 | 83.95 | 83.33 | 82.68 | 82.78 | 82.85 | 83.41 | 82.96 | 83.12
VAE-transfer | 100 | 85.17 | 84.86 | 83.92 | 85.48 | 84.75 | 84.06 | 84.23 | 85.42 | 84.90 | 82.96 | 83.12
CNN-transfer | 5 | 65.31 | 64.94 | 65.48 | 64.16 | 63.96 | 64.19 | 65.45 | 65.25 | 65.43 | 65.15 | 64.93
CNN-transfer | 25 | 80.27 | 79.91 | 78.81 | 79.01 | 79.50 | 80.01 | 79.32 | 81.47 | 79.45 | 80.38 | 79.81
CNN-transfer | 50 | 86.44 | 86.37 | 85.60 | 86.77 | 85.16 | 85.71 | 85.54 | 85.68 | 86.74 | 86.71 | 86.07
CNN-transfer | 75 | 88.86 | 89.43 | 89.09 | 88.98 | 89.35 | 88.75 | 89.01 | 89.53 | 88.85 | 89.45 | 89.13
CNN-transfer | 100 | 90.89 | 91.12 | 90.22 | 90.39 | 90.51 | 90.27 | 90.71 | 90.39 | 91.08 | 90.84 | 90.64
