Which LIME should I trust?

Concepts, Challenges and Solutions

This paper has been accepted at XAI 2025 – see the official version on arXiv.

LIME Framework Overview

LIME (Local Interpretable Model-agnostic Explanations) is a popular XAI framework that explains the decisions of black-box models, including neural networks, in a model-agnostic and instance-specific manner. It supports explanations across various data modalities, including images, text, and tabular data. This page summarizes the key components of LIME, its applications, and known issues.
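For instance, with the reference lime Python package, a tabular explanation takes only a few lines. The sketch below is illustrative, not part of the paper: X_train, X_test, feature_names, and clf are assumed placeholders for your own data and any classifier exposing predict_proba.

from lime.lime_tabular import LimeTabularExplainer

# Assumed placeholders: X_train / X_test are NumPy arrays, clf is any
# fitted model exposing predict_proba (e.g., a scikit-learn classifier).
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)
exp = explainer.explain_instance(
    X_test[0],            # the single instance to explain
    clf.predict_proba,    # black-box probability function
    num_features=5,       # size of the explanation
)
print(exp.as_list())      # [(feature condition, weight), ...]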

Key Components of LIME

Figure: Overview of the LIME framework and an illustration of its explanations.
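To make the pipeline concrete, the sketch below implements LIME's core loop for tabular data along the four stages referenced in the issue table further down (feature generation, sample generation, feature attribution, explanation representation). It is a minimal illustration under assumptions, not the reference implementation: predict_fn, the noise scale, and the kernel width sigma are placeholders.

import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(x, predict_fn, num_samples=1000, sigma=0.75):
    """Explain predict_fn at instance x with a weighted linear surrogate."""
    d = x.shape[0]
    # 1) Feature generation: for tabular data the interpretable features
    #    coincide with the input dimensions.
    # 2) Sample generation: perturb the instance with Gaussian noise.
    Z = x + np.random.normal(scale=0.5, size=(num_samples, d))
    # Proximity kernel: nearer perturbations receive larger weights.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / sigma ** 2)
    # 3) Feature attribution: fit a weighted linear surrogate locally.
    y = predict_fn(Z)                 # black-box outputs for the samples
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    # 4) Explanation representation: coefficients serve as attributions.
    return surrogate.coef_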

Key Issues in LIME

Despite its strengths, LIME faces several limitations, summarized in the table below:

Each entry in the table lists the paper title and year, the issue it identifies, the affected LIME component (feature generation, sample generation, feature attribution, or explanation representation), and whether code is available.


Applications of LIME

LIME has been applied to complex AI models such as CNNs, LSTMs, and transformer architectures in domains including healthcare, finance, and manufacturing. Enhancements have been proposed to improve its efficiency, fidelity, and stability.
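As a sketch of the image case, the lime package also ships a LimeImageExplainer that perturbs superpixels of the input. The names model and image below are assumed placeholders for a CNN with a batch prediction function and an (H, W, 3) array; this is a usage sketch, not the paper's method.

from lime import lime_image
from skimage.segmentation import mark_boundaries

# Assumed placeholders: `image` is an (H, W, 3) NumPy array and
# model.predict maps a batch of images to class probabilities.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    model.predict,
    top_labels=1,
    hide_color=0,        # value used to mask switched-off superpixels
    num_samples=1000,    # number of perturbed images
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,  # keep only superpixels supporting the class
    num_features=5,
    hide_rest=False,
)
overlay = mark_boundaries(img, mask)  # draw superpixel boundaries for display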

Citation

Copy the citation in your preferred format:

BibTeX
@inproceedings{knab2025limeitrustconcepts,
  title={Which LIME should I trust? Concepts, Challenges, and Solutions},
  author={Patrick Knab and Sascha Marton and Udo Schlegel and Christian Bartelt},
  booktitle={Proceedings of the XAI 2025 Conference},
  year={2025},
  eprint={2503.24365},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2503.24365},
}
Plain Text
Patrick Knab, Sascha Marton, Udo Schlegel, and Christian Bartelt (2025). Which LIME should I trust? Concepts, Challenges, and Solutions. Accepted at XAI 2025 Conference. Retrieved from https://arxiv.org/abs/2503.24365