Explainable Outlier Detection for Autoencoders in the Domain of Time Series

Master's Thesis, 2023

By Patrick Knab

In data analytics, the ability to pinpoint anomalies or outliers is invaluable, particularly in complex datasets. This thesis explores convolutional autoencoders (CAEs) for anomaly detection in univariate time series, bridging the gap between advanced AI techniques and practical application.
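As a rough illustration of this setup, the sketch below shows a small 1-D convolutional autoencoder that scores sliding windows of a univariate series by their reconstruction error. The architecture, window length, and scoring rule here are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Minimal sketch: a 1-D convolutional autoencoder for univariate time series.
# Windows the model reconstructs poorly receive high anomaly scores.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, window: int = 128, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (window // 4), latent_dim),   # latent bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (window // 4)), nn.ReLU(),
            nn.Unflatten(1, (32, window // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):                 # x: (batch, 1, window)
        return self.decoder(self.encoder(x))

def anomaly_scores(model, windows):
    """Per-window anomaly score = mean squared reconstruction error."""
    with torch.no_grad():
        recon = model(windows)
        return ((windows - recon) ** 2).mean(dim=(1, 2))
```

Windows with high scores are flagged as anomalous, typically via a threshold fitted on reconstruction errors from normal training data.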

The Challenge of Black-Box Models in Anomaly Detection

One of the core challenges in leveraging CAEs for anomaly detection is their inherent nature as “black-box” models. While these models are powerful in identifying deviations from the norm, understanding how and why they make these determinations remains a complex task. This study aims not only to harness the power of CAEs but also to demystify their inner workings.

Enhancing Transparency with Explainable AI (XAI)

To tackle the opacity of CAEs, the study adapts popular XAI techniques to the autoencoder setting: Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Layer-wise Relevance Propagation (LRP). These methods peel back the layers of CAEs, offering a window into their decision-making processes.
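To make the adaptation concrete, here is a hedged sketch of one of these methods, a Grad-CAM variant for a reconstruction model: since an autoencoder has no class logits, the reconstruction error serves as the target whose gradients weight the encoder's feature maps. The choice of target layer and of MSE as the backpropagated quantity are assumptions for illustration, not necessarily the thesis' exact adaptation.

```python
# Grad-CAM adapted to an autoencoder: gradients of the reconstruction error
# with respect to a convolutional layer's feature maps yield channel weights,
# whose weighted activation sum gives a per-timestep relevance map.
import torch
import torch.nn.functional as F

def grad_cam_1d(model, conv_layer, x):
    """Return a per-timestep relevance map for a window x of shape (1, 1, T)."""
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))

    recon = model(x)
    loss = F.mse_loss(recon, x)          # target: reconstruction error
    model.zero_grad()
    loss.backward()
    h1.remove(); h2.remove()

    a, g = feats["a"], grads["a"]        # activations / gradients: (1, C, T')
    weights = g.mean(dim=2, keepdim=True)           # channel weights (1, C, 1)
    cam = F.relu((weights * a).sum(dim=1))          # relevance over T'
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-1],
                        mode="linear", align_corners=False).squeeze(1)
    return cam / (cam.max() + 1e-8)      # normalized to [0, 1]
```

For the CAE sketched earlier, `conv_layer` could be the last encoder convolution, e.g. `model.encoder[2]`.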

Introducing a Novel Quantitative Measurement (QM) Technique

Beyond applying existing XAI tools, the study introduces a quantitative measurement (QM) technique that assesses the quality of the explanations with respect to the encoder. It does so by perturbing an instance in the regions an explanation highlights and measuring the distance between the original and perturbed instances in the encoder's latent-space mapping, giving the interpretability of these complex models a tangible, measurable dimension.
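A minimal sketch of how such a latent-distance measure could look: perturb the timesteps an explanation marks as most relevant, re-encode, and compare the resulting latent shift against a random-perturbation baseline. The zeroing perturbation, the top-k selection, and the ratio-based score are illustrative assumptions, not the thesis' exact formulation.

```python
# Explanation quality via latent-space distance: if an explanation is faithful,
# perturbing its most relevant timesteps should move the latent representation
# more than perturbing randomly chosen timesteps.
import torch

def latent_shift(encoder, x, mask):
    """L2 distance in latent space after zeroing the masked timesteps of x."""
    with torch.no_grad():
        z = encoder(x)
        z_pert = encoder(x * (1 - mask))           # zero out masked region
        return torch.norm(z - z_pert, dim=1)

def explanation_quality(encoder, x, relevance, k=10):
    """Compare latent shift for the top-k relevant vs. random timesteps."""
    T = x.shape[-1]
    top = torch.zeros_like(x)
    idx = relevance.argsort(descending=True)[..., :k]
    top[..., idx.squeeze()] = 1.0                  # mask of most relevant steps
    rand = torch.zeros_like(x)
    rand[..., torch.randperm(T)[:k]] = 1.0         # random baseline mask
    # A ratio > 1 suggests the explanation highlights regions that genuinely
    # drive the encoder's latent representation.
    return latent_shift(encoder, x, top) / latent_shift(encoder, x, rand)
```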

AEE: Fusing Explanations for Enhanced Interpretation

Perhaps the most striking contribution of this study is the development of the Aggregated Explanatory Ensemble (AEE). This novel approach amalgamates the explanations derived from multiple XAI techniques, harnessing their individual strengths to create a single, comprehensive interpretation. By blending several attribution methods, AEE aims to give a more robust picture of the model's behavior than any one technique offers on its own.
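Conceptually, the fusion can be as simple as bringing each method's attribution map onto a common scale and combining them. The sketch below uses min-max scaling and an unweighted mean as illustrative assumptions; the actual AEE may weight or combine the methods differently.

```python
# Aggregating attribution maps from several XAI methods into one consensus map.
import torch

def aggregate_explanations(maps):
    """maps: list of per-timestep relevance tensors, each shaped (T,)."""
    scaled = []
    for m in maps:
        m = m.abs()                                      # magnitude of relevance
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # min-max scale to [0, 1]
        scaled.append(m)
    return torch.stack(scaled).mean(dim=0)               # consensus relevance map

# Example: fuse maps from Grad-CAM, LIME, SHAP, and LRP (computed elsewhere):
# aee_map = aggregate_explanations([cam_map, lime_map, shap_map, lrp_map])
```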

Conclusion: Paving the Way for a Clearer Understanding of AI in Anomaly Detection

This research is a significant stride towards demystifying the complex mechanisms of convolutional autoencoders in anomaly detection. By integrating XAI techniques and introducing innovative methods like AEE, it sets a new precedent in the field, promising a future where AI’s advanced capabilities are matched by our ability to comprehend and interpret them effectively. The implications of this study extend beyond anomaly detection, offering insights and methodologies that could be applied across various domains where understanding AI decisions is crucial.


Tags: XAI AE AI