Interpretability of Deep Neural Networks In Continual Learning Settings

Author: Siddhant Bhambri

Virtual Room: https://eu.bbcollab.com/guest/777b6f5df804453381458e244e78d1e8

Date & time: 22/09/2021 – 10:15 h

Session name: Continual Learning

Supervisor: Bogdan Raducanu

Abstract:

Deep learning (DL) with deep neural networks (DNNs) lies at the core of modern artificial intelligence (AI). Today, AI plays a crucial role in some of the most demanding domains, such as autonomous driving and medical imaging, and in the coming years we can expect deep learning to be applied widely across almost all domains. However, the black-box nature of DNNs has become one of the primary obstacles to their wide acceptance in mission-critical applications such as medical diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has recently attracted much research attention. In this work, we investigate these black boxes (deep neural networks) in continual learning (CL) settings, and we visualise and analyse different aspects of the functioning and behaviour of these architectures.

Committee:

– President: Montse Pardàs (UPC)
– Secretary: Àgata Lapedriza Garcia (UOC)
– Vocal: Javier Ruiz Hidalgo (UPC)