Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning


Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné and Rakesh B. Bobba


Abstract - Detecting and rejecting adversarial examples is essential in security-sensitive and safety-critical systems that use deep CNNs. In this paper, we propose an approach to augment CNNs with out-distribution learning in order to reduce the misclassification rate by rejecting adversarial examples. We empirically show that our augmented CNNs can either reject or correctly classify most adversarial examples generated with well-known methods (>95% for MNIST and >75% for CIFAR-10 on average). Furthermore, we achieve this without training on any specific type of adversarial examples and without significantly sacrificing the accuracy of the models on clean samples (<4%).
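The paper should be consulted for the exact method, but the general idea of out-distribution learning can be illustrated: the CNN's output layer is given one extra class that is trained on out-of-distribution samples, and at test time any input assigned to that class is rejected rather than classified. Below is a minimal, hypothetical PyTorch sketch of this idea, not the authors' implementation; the architecture, the number of classes, and the random tensors standing in for real in- and out-distribution data are all illustrative assumptions.

import torch
import torch.nn as nn

K = 10  # assumed number of in-distribution classes (e.g., MNIST digits)

class AugmentedCNN(nn.Module):
    """Small CNN with K + 1 logits; the extra logit is a reject class."""
    def __init__(self, num_classes=K):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # K + 1 outputs: index K is the out-distribution ("reject") class.
        self.classifier = nn.Linear(64 * 7 * 7, num_classes + 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def predict_or_reject(model, x):
    # Returns labels in 0..K; a prediction of K means the input is rejected.
    with torch.no_grad():
        return model(x).argmax(dim=1)

# Training mixes in-distribution batches (labels 0..K-1) with
# out-distribution batches labeled K, under ordinary cross-entropy.
model = AugmentedCNN()
x_in, y_in = torch.randn(8, 1, 28, 28), torch.randint(0, K, (8,))
x_out, y_out = torch.randn(8, 1, 28, 28), torch.full((8,), K)
loss = nn.CrossEntropyLoss()(model(torch.cat([x_in, x_out])),
                             torch.cat([y_in, y_out]))
loss.backward()

Under this scheme, an adversarial example that no longer resembles the training distribution tends to fall into the extra class and is rejected instead of being misclassified.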

Download document

Bibtex:

@inproceedings{Abbasi1201,
    author    = { Mahdieh Abbasi and Arezoo Rajabi and Christian Gagné and Rakesh B. Bobba },
    title     = { Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning },
    booktitle = { DSN Workshop on Dependable and Secure Machine Learning (DSML 2018) },
    year      = { 2018 },
    month     = { June },
    web       = { https://arxiv.org/abs/1804.08794 }
}

Last modified: 2018/05/27 by cgagne


©2002-. Laboratoire de Vision et Systèmes Numériques. All rights reserved.