Seminars
13-04-2018 - Laboratoire LVSN, Department of Electrical and Computer Engineering, Université Laval
Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning
Abstract: The ease with which adversarial examples can be generated for deep neural networks raises fundamental questions about how these networks function, as well as concerns about their use in critical systems. In this presentation, we draw a connection between over-generalization and adversaries: a possible cause of adversarial examples is that models are designed to make decisions over the entire input space, leading to inappropriately high-confidence decisions in regions of the input space not represented in the training set. We empirically show that an augmented CNN, trained without any type of adversarial example, can increase robustness by either rejecting or correctly classifying most adversarial examples generated with several well-known attack methods, without significantly sacrificing its accuracy on clean samples.
The seminar will be presented at 11:30 a.m. in room PLT-1120.
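To illustrate the idea of an augmented CNN with out-distribution learning, here is a minimal sketch in PyTorch. It assumes one common way of realizing such a model: adding an extra (K+1)-th "dustbin" output class, training on a mix of in-distribution data and out-distribution samples labeled with that class, and rejecting at test time any input assigned to it. The network architecture, class count, and helper names below are illustrative assumptions, not the speaker's exact method.

```python
# Hypothetical sketch (assumption): a CNN "augmented" with an extra dustbin class
# used to absorb out-distribution inputs, so unfamiliar inputs can be rejected
# instead of receiving a confident in-distribution label.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10          # in-distribution classes (e.g. a 10-class image task)
DUSTBIN = NUM_CLASSES     # index of the extra rejection class

class AugmentedCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # K + 1 logits: K in-distribution classes plus one dustbin class
        self.classifier = nn.Linear(64, NUM_CLASSES + 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

def training_step(model, optimizer, in_x, in_y, out_x):
    """One step on a mix of in-distribution and out-distribution samples.

    Out-distribution samples (out_x) are all labeled with the dustbin class;
    no adversarial examples are used anywhere in training.
    """
    out_y = torch.full((out_x.size(0),), DUSTBIN, dtype=torch.long)
    x = torch.cat([in_x, out_x])
    y = torch.cat([in_y, out_y])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict_or_reject(model, x):
    """Return predicted labels; a prediction equal to DUSTBIN means 'reject'."""
    return model(x).argmax(dim=1)
```

Under this formulation, an adversarial or otherwise out-of-distribution input is handled safely if it either falls into the dustbin class (rejected) or still lands on its correct in-distribution label, which matches the robustness criterion described in the abstract.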