Patrick Knab
PhD Candidate

My research focuses on the intersection of Computer Vision and Explainable Artificial Intelligence (XAI), with a particular emphasis on leveraging foundation models to extract meaningful visual concepts from target domains. The goal is to make AI systems more transparent, interpretable, and effective.


Education
  • University of Mannheim
    M.Sc. in Information Science
    Jan. 2021 - Aug. 2023
  • Lappeenranta University of Technology, Finland
    Semester Abroad
    Jan. 2020 - Apr. 2020
  • University of Mannheim
    B.Sc. in Information Science
    Sep. 2017 - Dec. 2020
Experience
  • Clausthal University of Technology
    PhD Candidate
    Feb. 2025 - present
  • University of Mannheim
    PhD Candidate
    Sep. 2023 - Jan. 2025
  • Robert Bosch GmbH, Bühl
    Master's Thesis Student
    Oct. 2022 - Apr. 2023
  • Robert Bosch GmbH, Bühl
    Working Student
    Jul. 2022 - Aug. 2023
  • Institute for Enterprise Systems, Mannheim
    Scientific Assistant
    Jan. 2022 - Aug. 2023
  • Grosse-Hornke, Münster
    Working Student
    Nov. 2021 - Jul. 2022
  • Robert Bosch GmbH, Bühl
    Working Student
    Jan. 2021 - Dec. 2021
  • Robert Bosch GmbH, Bühl
    Bachelor's Thesis Student
    Sep. 2020 - Dec. 2020
  • Robert Bosch GmbH, Bühl
    Intern
    May 2020 - Aug. 2020
  • Porsche AG, Weissach
    Working Student
    Oct. 2019 - Dec. 2019
  • Robert Bosch GmbH, Bühl
    Intern
    Jul. 2019 - Aug. 2019
Honors & Awards
  • Ideenwettbewerb Regionalpreis Mainz
    2025
News
  • May 05, 2025
    We’re excited to share that our AI-powered Dart Counter App has won the Ideenwettbewerb Regionalpreis Mainz!
Selected Publications
DCBM: Data-Efficient Visual Concept Bottleneck Models

Katharina Prasse*, Patrick Knab*, Sascha Marton, Christian Bartelt, Margret Keuper (* equal contribution)

International Conference on Machine Learning (ICML) 2025

Concept Bottleneck Models (CBMs) enhance the interpretability of neural networks by basing predictions on human-understandable concepts. However, current CBMs typically rely on concept sets extracted from large language models or extensive image corpora, limiting their effectiveness in data-sparse scenarios. We propose Data-efficient CBMs (DCBMs), which reduce the need for large sample sizes during concept generation while preserving interpretability. DCBMs define concepts as image regions detected by segmentation or detection foundation models, allowing each image to generate multiple concepts across different granularities. This removes reliance on textual descriptions and large-scale pre-training, making DCBMs applicable for fine-grained classification and out-of-distribution tasks. Attribution analysis using Grad-CAM demonstrates that DCBMs deliver visual concepts that can be localized in test images. By leveraging dataset-specific concepts instead of predefined ones, DCBMs enhance adaptability to new domains.
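
Since the abstract compresses the pipeline, the following minimal sketch illustrates the concept-bottleneck idea in the spirit of DCBMs. It is a toy, not the paper's implementation: the random tensors stand in for CLIP-style embeddings of whole images and of foundation-model segment crops, and all sizes are invented.

import torch
import torch.nn as nn

num_images, num_concepts, num_classes, dim = 512, 64, 10, 768

# Stand-ins for (a) embeddings of whole images and (b) concept vectors that
# would come from embedding and clustering foundation-model segment crops.
image_emb = nn.functional.normalize(torch.randn(num_images, dim), dim=-1)
concept_emb = nn.functional.normalize(torch.randn(num_concepts, dim), dim=-1)
labels = torch.randint(0, num_classes, (num_images,))

# Bottleneck: each image is represented only by its similarity to the concepts.
concept_scores = image_emb @ concept_emb.T  # (num_images, num_concepts)

# Interpretable head: a single linear layer over the concept scores.
head = nn.Linear(num_concepts, num_classes)
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(concept_scores), labels)
    loss.backward()
    opt.step()

# Per-class concept importance can be read off the linear head's weights.
print(head.weight.shape)  # (num_classes, num_concepts)

Because the head is linear over named concepts, each prediction decomposes into per-concept contributions, which is the interpretability payoff the abstract refers to.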

Which LIME should I trust? Concepts, Challenges and Solutions

Patrick Knab, Sascha Marton, Udo Schlegel, Christian Bartelt

The World Conference on eXplainable Artificial Intelligence (XAI) 2025

As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models. LIME (Local Interpretable Model-agnostic Explanations) is among the most prominent model-agnostic approaches, generating explanations by approximating the behavior of black-box models around specific instances. Despite its popularity, LIME faces challenges related to fidelity, stability, and applicability to domain-specific problems. Numerous adaptations and enhancements have been proposed to address these issues, but the growing number of developments can be overwhelming, complicating efforts to navigate LIME-related research. To the best of our knowledge, this is the first survey to comprehensively explore and collect LIME's foundational concepts and known limitations. We categorize and compare its various enhancements, offering a structured taxonomy based on intermediate steps and key issues. Our analysis provides a holistic overview of advancements in LIME, guiding future research and helping practitioners identify suitable approaches. Additionally, we provide a continuously updated interactive website (https://patrick-knab.github.io/which-lime-to-trust/), offering a concise and accessible overview of the survey.
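
To ground the survey's description of LIME's mechanism, here is a minimal sketch of its core loop for images. The segmentation is abstracted into a segment count, and the black-box model and kernel width are toy placeholders, not the reference implementation.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
num_segments, num_samples = 20, 500

def black_box(z):
    # Stand-in classifier: score rises with the fraction of segments kept on.
    return z.mean(axis=1)

# 1) Perturb: random on/off masks over the instance's segments.
Z = rng.integers(0, 2, size=(num_samples, num_segments)).astype(float)

# 2) Query the black box on each perturbed instance.
y = black_box(Z)

# 3) Weight samples by proximity to the original instance (all segments on).
dist = 1.0 - Z.mean(axis=1)            # fraction of segments switched off
weights = np.exp(-(dist ** 2) / 0.25)  # exponential kernel, width assumed

# 4) Fit a weighted linear surrogate; its coefficients are the explanation.
surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
print(surrogate.coef_)                 # per-segment importance scores

The fidelity and stability issues the survey categorizes enter exactly here: through the segmentation, the sampling of Z, and the kernel weighting.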

DCBM: Data-Efficient Visual Concept Bottleneck Models

Katharina Prasse*, Patrick Knab*, Sascha Marton, Christian Bartelt, Margret Keuper (* equal contribution)

Conference on Computer Vision and Pattern Recognition (CVPR) @ XAI4CV Workshop 2025

Concept Bottleneck Models (CBMs) enhance the interpretability of neural networks by basing predictions on human-understandable concepts. However, current CBMs typically rely on concept sets extracted from large language models or extensive image corpora, limiting their effectiveness in data-sparse scenarios. We propose Data-efficient CBMs (DCBMs), which reduce the need for large sample sizes during concept generation while preserving interpretability. DCBMs define concepts as image regions detected by segmentation or detection foundation models, allowing each image to generate multiple concepts across different granularities. This removes reliance on textual descriptions and large-scale pre-training, making DCBMs applicable for fine-grained classification and out-of-distribution tasks. Attribution analysis using Grad-CAM demonstrates that DCBMs deliver visual concepts that can be localized in test images. By leveraging dataset-specific concepts instead of predefined ones, DCBMs enhance adaptability to new domains.
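
Since the attribution analysis relies on Grad-CAM, here is a generic, minimal Grad-CAM sketch on a toy CNN; the model, target layer, and random input are placeholders rather than the paper's setup.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
target_layer = model[0]

# Capture the target layer's activations and output gradients via hooks.
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)
logits = model(x)
logits[0, logits.argmax()].backward()  # backprop the top-class score

# Channel weights = global-average-pooled gradients; CAM = weighted activations.
w = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((w * acts["a"]).sum(dim=1))
print(cam.shape)  # (1, 32, 32): coarse localization map over the input grid

Overlaying such a map on a test image is what lets the paper check that a concept's evidence sits where a human would expect it.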

Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models

Patrick Knab, Sascha Marton, Christian Bartelt

International Conference on Learning Representations (ICLR) @ FM-Wild Workshop 2025

LIME (Local Interpretable Model-agnostic Explanations) is a popular XAI framework for unraveling decision-making processes in vision machine-learning models. The technique utilizes image segmentation methods to identify fixed regions for calculating feature importance scores as explanations. Therefore, poor segmentation can weaken the explanation and reduce the importance of segments, ultimately affecting the overall clarity of interpretation. To address these challenges, we introduce the DSEG-LIME (Data-Driven Segmentation LIME) framework, featuring: i) a data-driven segmentation for human-recognized feature generation by foundation model integration, and ii) a user-steered granularity in the hierarchical segmentation procedure through composition. Our findings demonstrate that DSEG outperforms on several XAI metrics on pre-trained ImageNet models and improves the alignment of explanations with human-recognized concepts. The code is available at: https://github.com/patrick-knab/DSEG-LIME
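
The framework's first ingredient, replacing LIME's fixed segmentation with foundation-model masks, can be sketched roughly as below. generate_masks is a hypothetical stand-in for a mask generator such as SAM, and the flattening rule (later, finer masks overwrite coarser ones) is a simplification of the hierarchical composition described in the paper.

import numpy as np

def generate_masks(image):
    # Placeholder for a foundation-model mask generator (e.g., SAM):
    # returns boolean masks of shape (H, W), ordered coarse to fine.
    h, w = image.shape[:2]
    return [np.broadcast_to(np.arange(w) < w // (i + 2), (h, w))
            for i in range(4)]

def masks_to_segments(image, masks):
    # Flatten overlapping masks into a LIME-style integer label map,
    # letting later (finer) masks refine earlier (coarser) ones.
    segments = np.zeros(image.shape[:2], dtype=int)
    for label, mask in enumerate(masks, start=1):
        segments[mask] = label
    return segments

image = np.zeros((64, 64, 3))
segments = masks_to_segments(image, generate_masks(image))
print(np.unique(segments))  # segment ids consumable by a standard LIME explainer

Everything downstream (perturbation, querying, surrogate fitting) stays standard LIME; only the regions it reasons over become data-driven.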
