LIME Framework Overview
LIME (Local Interpretable Model-agnostic Explanations) is a popular XAI framework designed to explain the predictions of machine learning models in a model-agnostic and instance-specific manner. It supports explanations across various data modalities, including images, text, and tabular data. This page summarizes the key components of LIME, its applications, and known issues.
Key Components of LIME

Figure: Illustration of LIME explanations.
- Feature Generation: LIME derives interpretable features from the input instance according to its modality (e.g., superpixels for images, tokens for text, individual attributes for tabular data); segmentation techniques keep the number of features manageable.
- Sample Generation: Perturbed samples are generated by selectively modifying features of the input instance.
- Feature Attribution: A proximity measure assigns weights to perturbed samples, and a linear surrogate model approximates the original model’s behavior.
- Explanation Representation: Explanations are derived from the coefficients of the surrogate model, highlighting the most influential parts of images, text, or tabular data (a minimal code sketch of this pipeline follows below).
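
The pipeline above can be condensed into a few lines of code. The following is a minimal, self-contained sketch for tabular data; the toy `predict_fn`, the Gaussian perturbation scale, and the kernel width are illustrative assumptions, and the reference implementation instead uses interpretable binary features and modality-specific sampling.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: any function mapping feature vectors to a
# class probability. Replace with your own model's prediction function.
def predict_fn(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))

def lime_tabular_sketch(x, predict_fn, num_samples=1000, kernel_width=0.75):
    """Explain one instance x with a locally weighted linear surrogate."""
    rng = np.random.default_rng(0)

    # Sample generation: perturb the instance with Gaussian noise around x
    # (the noise scale is an illustrative assumption).
    X_pert = x + rng.normal(scale=0.5, size=(num_samples, x.shape[0]))

    # Query the black-box model on the perturbed samples.
    y_pert = predict_fn(X_pert)

    # Feature attribution: weight samples by proximity to x via an
    # exponential kernel, then fit a weighted linear surrogate model.
    distances = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)

    # Explanation representation: the surrogate's coefficients indicate how
    # each feature pushes the prediction up or down in the local neighborhood.
    return surrogate.coef_

x = np.array([1.0, 2.0])
print(lime_tabular_sketch(x, predict_fn))
```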
Key Issues in LIME
Despite its strengths, LIME faces some limitations:
- Locality Issue (L): Perturbation sampling may not capture the local decision boundary accurately.
- Fidelity Issue (F): The surrogate model might not precisely reflect the behavior of the original model.
- Interpretability Issue (I): The explanation format may require adjustments for clarity across data modalities.
- Stability Issue (S): Explanations can vary with minor input changes or across repeated runs (see the sketch after this list).
- Efficiency Issue (E): Generating explanations can be computationally demanding.
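
To make the stability issue concrete, the sketch below explains the same instance twice with different random seeds and compares the resulting feature weights. The dataset (iris), the random forest classifier, and the use of the reference `lime` package together with scikit-learn are illustrative assumptions, not choices made by the paper.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a simple black-box model on a toy dataset.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def explain(seed):
    # Build an explainer with a fixed seed and explain the first instance.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        random_state=seed,
    )
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    return dict(exp.as_list())

# Two runs that differ only in the sampling seed: diverging weights
# illustrate the stability issue (S).
run_a, run_b = explain(seed=1), explain(seed=2)
for feature, weight_a in run_a.items():
    print(feature, weight_a, run_b.get(feature))
```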
Table: Curated overview of LIME-related papers, with columns for Title, Year, Issue addressed (L, F, I, S, E), the LIME component concerned (Feature Generation, Sample Generation, Feature Attribution, Explanation Representation), and code availability.
Applications of LIME
LIME has been applied to complex AI models such as CNNs, LSTMs, and transformer architectures in domains including healthcare, finance, and manufacturing. Enhancements have been proposed to improve its efficiency, fidelity, and stability.
Citation
Copy the citation in your preferred format:
BibTeX
@inproceedings{knab2025limeitrustconcepts,
  title={Which LIME should I trust? Concepts, Challenges, and Solutions},
  author={Patrick Knab and Sascha Marton and Udo Schlegel and Christian Bartelt},
  booktitle={Proceedings of the XAI 2025 Conference},
  year={2025},
  eprint={2503.24365},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2503.24365},
}
Plain Text
Patrick Knab, Sascha Marton, Udo Schlegel, and Christian Bartelt (2025). Which LIME should I trust? Concepts, Challenges, and Solutions. Accepted at XAI 2025 Conference. Retrieved from https://arxiv.org/abs/2503.24365