Research article
DOI: 10.1145/3650215.3650384

Feature learning model based on Feature space constraints and Self-Attention learning

Published: 16 April 2024

Abstract

Supervised time series anomaly detection is of significant importance in both machine learning research and industrial applications. Autoencoder-based methods have achieved outstanding performance in this field. However, these methods often suffer from the challenge of limited separation between low-dimensional representations of normal and anomalous data. To address this issue, this paper introduces a model called FFSA, which utilizes feature space constraints and self-attention. The FFSA model is built on the foundation of a temporal network-based autoencoder, featuring an information entropy feature learning module and a self-attention feature learning module. The information entropy feature learning module comprises an MLP mapping layer, layer normalization, and an entropy analysis layer, while the self-attention feature learning module includes layer-wise parameter extraction and concatenation, layer normalization, and a self-attention mechanism. These modules work together to increase the differentiation between normal and anomalous data and to extract more informative low-dimensional representations. Experimental results on multiple public datasets demonstrate that the FFSA model outperforms mainstream anomaly detection models, achieving an impressive F1 score of up to 97% on the KDDCUP dataset.
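The page does not include the authors' implementation, so the following is a minimal PyTorch sketch of how the two modules described in the abstract might be wired around a temporal-convolution autoencoder. Everything here is an assumption made for illustration: the class names, layer sizes, the dilated `Conv1d` encoder/decoder, the softmax-based entropy estimate, and the loss weighting are guesses, not the FFSA authors' code.

```python
# Hypothetical sketch of the FFSA design described in the abstract; this is
# NOT the authors' code. Module names, layer sizes, the dilated Conv1d
# encoder/decoder, and the softmax-based entropy estimate are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntropyFeatureModule(nn.Module):
    """Information-entropy feature learning: MLP mapping -> LayerNorm ->
    entropy analysis (here, Shannon entropy of a softmax over features)."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, z):
        h = self.norm(self.mlp(z))
        p = F.softmax(h, dim=-1)
        entropy = -(p * torch.log(p + 1e-8)).sum(dim=-1).mean()
        return h, entropy


class SelfAttentionFeatureModule(nn.Module):
    """Self-attention feature learning over per-layer features that have
    been extracted, stacked, and layer-normalized."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, layer_feats):
        # layer_feats: list of (batch, dim) tensors -> (batch, n_layers, dim)
        x = self.norm(torch.stack(layer_feats, dim=1))
        out, _ = self.attn(x, x, x)
        return out.mean(dim=1)  # pool across the layer axis


class FFSA(nn.Module):
    """Temporal (dilated-convolution) autoencoder plus the two modules."""

    def __init__(self, n_features: int, latent_dim: int = 32):
        super().__init__()
        conv = dict(kernel_size=3, padding=2, dilation=2)  # length-preserving
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 64, **conv), nn.ReLU(),
            nn.Conv1d(64, latent_dim, **conv),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(latent_dim, 64, **conv), nn.ReLU(),
            nn.Conv1d(64, n_features, **conv),
        )
        self.entropy_mod = EntropyFeatureModule(latent_dim, latent_dim)
        self.attn_mod = SelfAttentionFeatureModule(latent_dim)

    def forward(self, x):                 # x: (batch, n_features, time)
        z = self.encoder(x)               # (batch, latent_dim, time)
        recon = self.decoder(z)
        z_pooled = z.mean(dim=-1)         # (batch, latent_dim)
        h, entropy = self.entropy_mod(z_pooled)
        feats = self.attn_mod([z_pooled, h])
        return recon, entropy, feats


if __name__ == "__main__":
    model = FFSA(n_features=8)
    x = torch.randn(16, 8, 100)           # 16 windows, 8 channels, 100 steps
    recon, entropy, feats = model(x)
    # Reconstruction loss plus an entropy term as the feature-space
    # constraint; the sign and the 0.1 weight are guesses for illustration.
    loss = F.mse_loss(recon, x) + 0.1 * entropy
    print(recon.shape, feats.shape, float(loss))
```

In this sketch the anomaly score would come from the reconstruction error, with the entropy term and attention-derived features shaping the latent space so that normal and anomalous windows separate; how FFSA actually combines these signals is specified in the paper itself, not here.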




    Published In

ICMLCA '23: Proceedings of the 2023 4th International Conference on Machine Learning and Computer Application
October 2023, 1065 pages
ISBN: 9798400709449
DOI: 10.1145/3650215

Publisher

Association for Computing Machinery, New York, NY, United States



    Funding Sources

    • Hebei Natural Science Foundation

