DOI: 10.1609/aaai.v37i8.26098

MHCCL: masked hierarchical cluster-wise contrastive learning for multivariate time series

Published: 07 February 2023

Abstract

Learning semantic-rich representations from raw unlabeled time series data is critical for downstream tasks such as classification and forecasting. Contrastive learning has recently shown its promising representation learning capability in the absence of expert annotations. However, existing contrastive approaches generally treat each instance independently, which leads to false negative pairs that share the same semantics. To tackle this problem, we propose MHCCL, a Masked Hierarchical Cluster-wise Contrastive Learning model, which exploits semantic information obtained from the hierarchical structure consisting of multiple latent partitions for multivariate time series. Motivated by the observation that fine-grained clustering preserves higher purity while the coarse-grained one reflects higher-level semantics, we propose a novel downward masking strategy to filter out false negatives and supplement positives by incorporating the multi-granularity information from the clustering hierarchy. In addition, a novel upward masking strategy is designed in MHCCL to remove outliers of clusters at each partition to refine prototypes, which helps speed up the hierarchical clustering process and improves the clustering quality. We conduct experimental evaluations on seven widely-used multivariate time series datasets. The results demonstrate the superiority of MHCCL over the state-of-the-art approaches for unsupervised time series representation learning.
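
The abstract describes the downward and upward masking strategies only at a high level. The following is a minimal sketch, not the authors' implementation: the clustering backend (scikit-learn agglomerative clustering), the outlier threshold, the InfoNCE-style loss, and all function names (hierarchical_partitions, upward_masked_prototypes, downward_masked_loss) are illustrative assumptions.

```python
# Hedged sketch (assumptions, not the MHCCL code): cluster-wise contrastive
# learning with "upward" outlier masking and "downward" false-negative masking.
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def hierarchical_partitions(z, n_levels=3):
    """Cluster embeddings z of shape (N, D) at several granularities, fine to coarse."""
    n = len(z)
    sizes = [max(2, n // 2 ** (lvl + 1)) for lvl in range(n_levels)]  # e.g. N/2, N/4, N/8 clusters
    return [AgglomerativeClustering(n_clusters=k).fit_predict(z) for k in sizes]


def upward_masked_prototypes(z, labels, outlier_quantile=0.9):
    """Refine each cluster prototype by dropping members far from the cluster mean
    (the "upward masking" idea: outliers do not contribute to the prototype)."""
    protos = {}
    for c in np.unique(labels):
        members = z[labels == c]
        dist = np.linalg.norm(members - members.mean(axis=0), axis=1)
        keep = dist <= np.quantile(dist, outlier_quantile)   # mask outliers
        protos[c] = members[keep].mean(axis=0)
    return protos


def downward_masked_loss(z, partitions, temperature=0.1):
    """InfoNCE-style instance loss where samples sharing the anchor's fine-grained
    cluster are removed from the negatives and the closest of them is used as an
    extra positive (the "downward masking" idea)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    fine = partitions[0]                          # finest partition: highest purity
    same = fine[:, None] == fine[None, :]
    np.fill_diagonal(same, False)                 # the anchor itself is excluded
    neg = ~same
    np.fill_diagonal(neg, False)
    losses = []
    for i in range(len(z)):
        pos_ids = np.where(same[i])[0]
        if pos_ids.size == 0:                     # singleton cluster: no supplemented positive
            continue
        pos = sim[i, pos_ids].max()               # supplemented positive from the same cluster
        logits = np.concatenate(([pos], sim[i, neg[i]]))
        losses.append(np.logaddexp.reduce(logits) - pos)
    return float(np.mean(losses)) if losses else 0.0
```

In a training loop one could embed a batch of augmented series, call hierarchical_partitions on the embeddings, refine per-level prototypes with upward_masked_prototypes, and combine the instance-level loss above with a prototype-level contrastive term so that the coarser partitions also contribute, which is where the multi-granularity information from the clustering hierarchy enters.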



Information

Published In

AAAI'23/IAAI'23/EAAI'23: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence
February 2023
16496 pages
ISBN: 978-1-57735-880-0

Sponsors

  • Association for the Advancement of Artificial Intelligence

Publisher

AAAI Press

Publication History

Published: 07 February 2023

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Cited By

  • (2024) Multi-view Self-Supervised Contrastive Learning for Multivariate Time Series. Proceedings of the 32nd ACM International Conference on Multimedia, 9582-9590. https://doi.org/10.1145/3664647.3681095. Online publication date: 28-Oct-2024.
  • (2024) Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey. ACM Computing Surveys, 56(9), 1-45. https://doi.org/10.1145/3649448. Online publication date: 25-Apr-2024.
  • (2024) Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2560-2571. https://doi.org/10.1145/3637528.3671673. Online publication date: 25-Aug-2024.
  • (2024) HiMTM: Hierarchical Multi-Scale Masked Time Series Modeling with Self-Distillation for Long-Term Forecasting. Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 3352-3362. https://doi.org/10.1145/3627673.3679741. Online publication date: 21-Oct-2024.
  • (2024) A self-supervised contrastive change point detection method for industrial time series. Engineering Applications of Artificial Intelligence, 133(PB). https://doi.org/10.1016/j.engappai.2024.108217. Online publication date: 1-Jul-2024.
