Applied Deep Learning 2024 - Lecture 12 - Explainable AI
Author: Alexander Pacha
Uploaded: 2024-11-25
Views: 349
It's great that we can train machine learning models, but what if they don't work as we expect them to? How can we know that our trained models base their decisions on the right reasons, and are not just guessing, or, even worse, are biased by our training dataset in a way that makes them seem to work fine while actually performing horribly in practice? In this lecture, we explore a couple of methods for getting at least a few explanations of what's going on inside a model.
Complete Playlist: • Applied Deep Learning 2024 - TU Wien
== Literature ==
1. Molnar. Interpretable Machine Learning. 2019.
2. Arrieta et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. 2019.
3. Petsiuk et al. RISE: Randomized Input Sampling for Explanation of Black-box Models. 2018.
4. Bau et al. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. 2018.
5. Koul et al. Learning Finite State Representations of Recurrent Policy Networks. 2018.
6. Ribeiro et al. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. 2016.
7. Sarkar. Model Interpretation Strategies. 2018.
8. Lundberg et al. A Unified Approach to Interpreting Model Predictions. 2017.
9. Tjoa et al. A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI. 2019.
10. Liu et al. Towards Visually Explaining Variational Autoencoders. 2019.
11. Mundhenk et al. Efficient Saliency Maps for Explainable AI. 2019.
12. Angelov et al. Towards Explainable Deep Neural Networks (xDNN). 2019.
13. Fan et al. On Interpretability of Artificial Neural Networks. 2020.
14. Lundberg et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. 2018.
15. Schreiber. Saliency Maps for Deep Learning. 2019.
16. Simonyan et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. 2014.
17. Yau et al. What Did You Think Would Happen? Explaining Agent Behaviour through Intended Outcomes. 2020.
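For a flavour of what some of these methods compute, below is a minimal sketch of a vanilla gradient saliency map in the spirit of Simonyan et al. (entry 16). It is an illustration, not the lecture's implementation: the pretrained ResNet-18 and the input file "cat.jpg" are assumptions chosen for the example. The idea is to backpropagate the top class score to the input pixels and keep the channel-wise maximum absolute gradient, so large values mark pixels whose change most affects the prediction.

```python
# Minimal vanilla-gradient saliency sketch (after Simonyan et al., entry 16).
# Assumptions: torchvision is installed and "cat.jpg" is a placeholder image.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Any pretrained classifier works; ResNet-18 is used purely for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg").convert("RGB")  # hypothetical input image
x = preprocess(image).unsqueeze(0)            # shape: (1, 3, 224, 224)
x.requires_grad_(True)                        # we need gradients w.r.t. pixels

# Forward pass, then backpropagate the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency map: per-pixel maximum absolute gradient across colour channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
```

Gradient-based maps like this are cheap (one backward pass) but noisy; perturbation-based methods such as RISE (entry 3) or LIME (entry 6) trade many forward passes for model-agnostic explanations.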