
Explainable AI for time series via Virtual Inspection Layers

Published: 02 July 2024

Abstract

The field of eXplainable Artificial Intelligence (XAI) has witnessed significant advancements in recent years. However, most progress has been concentrated in the domains of computer vision and natural language processing; for time series data, where the input itself is often not interpretable, dedicated XAI research is scarce. In this work, we put forward a virtual inspection layer that transforms the time series into an interpretable representation and allows relevance attributions to be propagated to this representation via local XAI methods. In this way, we extend the applicability of XAI methods to domains (e.g., speech) where the input is interpretable only after a transformation. We focus on the Fourier Transform, which is prominently applied in the preprocessing of time series, combined with Layer-wise Relevance Propagation (LRP), and refer to our method as DFT-LRP. We demonstrate the usefulness of DFT-LRP in various time series classification settings, such as audio and medical data, and showcase how DFT-LRP reveals differences in the classification strategies of models trained in different domains (e.g., time vs. frequency domain) and helps to discover how models act on spurious correlations in the data.
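As a toy illustration of the virtual-inspection-layer idea, the sketch below wraps a time-domain model with an inverse-DFT "layer" so that the wrapped model consumes frequency-domain input. The model's output is unchanged, but attributions computed with respect to the new input then live in the interpretable frequency domain. The `model` here is a hypothetical stand-in, not the paper's classifier:

```python
import numpy as np

def with_virtual_dft_layer(model):
    """Wrap a time-domain model so it consumes DFT coefficients instead.

    The inverse DFT acts as a 'virtual inspection layer': it changes nothing
    about the model's output, but attributions computed with respect to the
    new input are expressed over frequency components.
    """
    def wrapped(X):
        x = np.fft.ifft(X).real   # virtual layer: frequency -> time
        return model(x)           # unchanged downstream model
    return wrapped

# Toy usage: an energy "classifier" score; identical output in either domain
model = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, 2.0, 0.5, -1.0])
X = np.fft.fft(x)
print(np.isclose(model(x), with_virtual_dft_layer(model)(X)))  # expected: True
```

Because the inserted transform is invertible and applied together with its inverse, the composition is the identity on the model's input, which is what makes the layer "virtual."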

Graphical abstract


Highlights

We propose a new form of explanation for models trained on time series data.
We derive a closed-form formula for relevance propagation through the DFT and STDFT.
We extend evaluations to allow for comparison of explanations in different formats.
We show how our method gives insights into strategies of audio and ECG classifiers.
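The closed-form propagation rule itself is derived in the paper; as a minimal numerical sketch of the underlying mechanics, the code below treats the inverse DFT as a fixed linear layer x = Bs over real-valued DFT coefficients and redistributes time-domain relevance onto frequency bins with the basic LRP rule. This is a simplification of DFT-LRP with an assumed epsilon stabilizer, not the paper's exact formula:

```python
import numpy as np

def dft_lrp_sketch(relevance_time, x, eps=1e-12):
    """Redistribute time-domain LRP relevance onto DFT frequency bins.

    Treats the inverse DFT (a fixed linear map) as a virtual inspection
    layer and applies the basic LRP redistribution rule to it.
    """
    N = len(x)
    X = np.fft.fft(x)
    s = np.concatenate([X.real, X.imag])           # real-valued coefficients
    n = np.arange(N)[:, None]
    k = np.arange(N)[None, :]
    # Real inverse-DFT basis: x = B @ s for real-valued x
    B = np.hstack([np.cos(2 * np.pi * k * n / N),
                   -np.sin(2 * np.pi * k * n / N)]) / N
    z = B * s                                      # per-coefficient contributions, (N, 2N)
    denom = z.sum(axis=1, keepdims=True)           # reconstructs x up to numerics
    R_coeff = (z / (denom + eps)).T @ relevance_time
    return R_coeff[:N] + R_coeff[N:]               # merge real/imag relevance per bin

rng = np.random.default_rng(0)
x = rng.normal(size=64)
R_time = rng.normal(size=64)                       # stand-in for a model's relevance map
R_freq = dft_lrp_sketch(R_time, x)
# Total relevance is (approximately) conserved across the virtual layer
print(np.allclose(R_freq.sum(), R_time.sum(), atol=1e-6))  # expected: True
```

Conservation holds because each row of the redistribution matrix sums to one: the contributions of all coefficients to a time step add up exactly to the signal value at that step.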



Published In

Pattern Recognition, Volume 150, Issue C, June 2024, 726 pages

Publisher

Elsevier Science Inc.

United States

Author Tags

  1. Interpretability
  2. Explainable Artificial Intelligence
  3. Time series
  4. Discrete Fourier Transform
  5. Invertible transformations
  6. Audio classification

Qualifiers

  • Research-article
