As a Research Assistant, I specialized in explainable Artificial Intelligence (XAI), with a particular focus on improving the interpretability of neural networks.
Work Duration
January 2022 - August 2023
Projects and Responsibilities
1. Support in Evaluating Results in Explainable AI
- Objective: Enhance the explainability of neural networks through innovative approaches.
- Tasks:
  - Contributed to training a neural network on the weights and biases of another network in order to generate decision trees (a minimal sketch of this idea follows the project list).
  - Focused on understanding and interpreting complex AI models.
2. Implementation and Evaluation of Neural Network Architectures
- Objective: Improve the performance of neural networks in explaining other networks.
- Tasks:
  - Implemented and evaluated various neural network architectures.
  - Aimed at enhancing the ability of these networks to elucidate the workings of other AI models.
3. Implementation and Evaluation of the GRANDE Approach
- Objective: Conduct performance comparisons of gradient-based decision tree approaches.
- Tasks:
  - Evaluated the GRANDE method on diverse datasets.
  - Conducted thorough performance comparisons to assess the efficacy of this gradient-based decision tree training approach (a simplified sketch of gradient-trained trees also follows the project list).
4. Contribution to the XAI Paper Related to My Master’s Thesis
- Objective: Advance academic research in the field of explainable AI.
- Tasks:
  - Actively contributed to writing the XAI paper as part of my Master’s thesis.
  - Focused on exploring new frontiers in AI explainability and contributing to academic discourse.
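
To illustrate the idea behind Project 1, here is a minimal sketch, assuming PyTorch and a hypothetical `InterpretationNet` that maps the flattened weights and biases of a small target network to the parameters of an axis-aligned surrogate decision tree. The layer sizes, parameter layout, and names are my own assumptions for illustration, not the code used in the project.

```python
import torch
import torch.nn as nn

def flatten_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all weights and biases of a model into a single vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

class InterpretationNet(nn.Module):
    """Maps the flattened parameters of a target network to the parameters of a
    small axis-aligned surrogate decision tree (split-feature logits, split
    thresholds, leaf values). Hypothetical layout, for illustration only."""
    def __init__(self, n_target_params: int, n_features: int, depth: int = 2):
        super().__init__()
        self.n_internal = 2 ** depth - 1      # internal nodes of a full binary tree
        self.n_leaves = 2 ** depth
        self.n_features = n_features
        out_dim = self.n_internal * (n_features + 1) + self.n_leaves
        self.body = nn.Sequential(
            nn.Linear(n_target_params, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, theta: torch.Tensor):
        out = self.body(theta)
        split = self.n_internal * self.n_features
        feature_logits = out[:split].reshape(self.n_internal, self.n_features)
        thresholds = out[split:split + self.n_internal]
        leaf_values = out[split + self.n_internal:]
        return feature_logits, thresholds, leaf_values

# Usage: encode a small target MLP and decode surrogate-tree parameters from it.
target = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
theta = flatten_params(target)
inet = InterpretationNet(n_target_params=theta.numel(), n_features=4, depth=2)
feature_logits, thresholds, leaf_values = inet(theta)
```

In practice such a network would be trained on many (target network, reference tree) pairs or with a distillation loss so that the decoded tree mimics the target network's predictions; the sketch only shows the parameter-to-parameter mapping.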
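
The core idea behind gradient-based decision tree training, as evaluated in Project 3, is to make the tree's routing differentiable so that split parameters and leaf values can be optimized with ordinary gradient descent. The sketch below is my own simplified illustration using soft (sigmoid) splits in plain PyTorch; it is not the GRANDE implementation, which trains ensembles of axis-aligned trees with a more sophisticated parameterization, and all names and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    """A full binary tree of fixed depth whose splits are soft (sigmoid) gates,
    so every parameter can be trained with gradient descent.
    Simplified illustration only -- not the actual GRANDE implementation."""
    def __init__(self, n_features: int, depth: int = 3, n_outputs: int = 1):
        super().__init__()
        self.depth = depth
        n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
        self.split_weight = nn.Parameter(0.1 * torch.randn(n_internal, n_features))
        self.split_bias = nn.Parameter(torch.zeros(n_internal))
        self.leaf_values = nn.Parameter(torch.zeros(n_leaves, n_outputs))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # gate[:, i] = probability of routing a sample to the right child of node i
        gate = torch.sigmoid(x @ self.split_weight.T + self.split_bias)
        prob = torch.ones(x.shape[0], 1, device=x.device)  # probability of reaching the root
        node = 0
        for d in range(self.depth):                         # propagate level by level
            g = gate[:, node:node + 2 ** d]
            prob = torch.stack((prob * (1 - g), prob * g), dim=-1).reshape(x.shape[0], -1)
            node += 2 ** d
        return prob @ self.leaf_values                      # expectation over leaf predictions

# Usage: fit the soft tree to a synthetic regression target with plain gradient descent.
torch.manual_seed(0)
X = torch.randn(256, 5)
y = (X[:, 0] > 0.5).float().unsqueeze(1) + 0.1 * torch.randn(256, 1)
tree = SoftDecisionTree(n_features=5, depth=3)
opt = torch.optim.Adam(tree.parameters(), lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(tree(X), y)
    loss.backward()
    opt.step()
```

Replacing the sigmoid gates with hard splits (for example via a straight-through estimator) keeps the trained model a conventional, interpretable decision tree while still allowing gradient-based optimization.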
Skills and Technologies
- Explainable AI: Gained deep insights into making AI models more interpretable and transparent.
- Neural Network Architectures: Developed expertise in various architectures for improved AI explainability.
- GRANDE Method: Acquired skills in gradient-based decision tree training methods.
- Academic Research: Enhanced my research and academic writing skills, contributing to the field of AI.
Conclusion
This position as a Research Assistant was pivotal in developing my skills in the realm of explainable AI, leading me to pursue further research and innovation in this field. The experience has been instrumental in shaping my academic and professional journey towards advanced AI studies and applications.
Acknowledgements
I would like to thank Sascha Marton for his invaluable guidance and support throughout this journey. His mentorship deeply enriched my understanding and proficiency in the field, significantly contributing to my professional growth and future research direction.