Seminars
13-04-2018
Laboratoire LVSN, Dép. de génie électrique et de génie informatique, Université Laval
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning

Abstract: The ease with which adversarial instances can be generated for deep neural networks raises fundamental questions about how these models work and concerns about their use in critical systems. In this presentation, we draw a connection between over-generalization and adversaries: a possible cause of adversarial examples is that models are designed to make decisions over the entire input space, leading to inappropriately high-confidence decisions in regions of the input space not represented in the training set. We empirically show that an augmented CNN, which is not trained on any type of adversary, can increase robustness by either rejecting or correctly classifying most adversarial examples generated with several well-known attack methods, without significantly sacrificing accuracy on clean samples.
The seminar will be presented at 11:30 a.m. in room PLT-1120.
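To illustrate the idea of an "augmented" CNN with a rejection option, here is a minimal, hypothetical sketch in PyTorch. The architecture, class count, threshold, and the use of a single extra "dustbin" output for out-distribution inputs are illustrative assumptions, not the speaker's exact method.

```python
# Hypothetical sketch: a CNN augmented with one extra output class acting as a
# reject / out-of-distribution bin. It is not trained on adversaries; instead,
# out-distribution samples would be labeled with the extra class during training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AugmentedCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # num_classes + 1: the last logit is the "dustbin" (reject) class.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes + 1)
        self.num_classes = num_classes

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))


def predict_or_reject(model: AugmentedCNN, x: torch.Tensor, threshold: float = 0.5):
    """Return predicted labels, with -1 meaning 'rejected'."""
    probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    # Reject when the dustbin class wins or the confidence falls below the threshold.
    reject = (pred == model.num_classes) | (conf < threshold)
    return torch.where(reject, torch.full_like(pred, -1), pred)


if __name__ == "__main__":
    model = AugmentedCNN()
    x = torch.randn(4, 3, 32, 32)  # e.g. CIFAR-10-sized inputs
    print(predict_or_reject(model, x))
```

In this sketch, adversarial or otherwise out-of-distribution inputs would ideally fall into the extra class or receive low confidence, and are therefore rejected rather than classified with unwarranted confidence.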