Universal Adversarial Attacks on Text Classifiers

Despite the vast success neural networks have achieved in different application domains, they have proven to be vulnerable to adversarial perturbations: small changes to the input that lead them to produce the wrong output. In this paper, we propose a novel method, based on gradient projection, for generating universal adversarial perturbations for text, namely a sequence of words that can be added to any input in order to fool the classifier with high probability. We observed that text classifiers are quite vulnerable to such perturbations: inserting even a single adversarial word at the beginning of every input sequence can drop the accuracy from 93% to 50%.
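The abstract only names the technique, so the following is a minimal sketch of what a gradient-projection trigger search could look like, not the paper's actual method. All concrete details are assumptions: a toy linear bag-of-embeddings classifier, random data, a single-word trigger prepended to every input, a fixed step size, and a greedy accept rule. The core idea illustrated is the projection step: take a gradient ascent step on the trigger's embedding vector, then project the result back onto the nearest real word in the embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (illustrative assumptions, not the paper's models or data) ---
V, d, n = 50, 8, 32                   # vocab size, embedding dim, batch size
E = rng.normal(size=(V, d))           # word embedding table
W = rng.normal(size=(2, d))           # linear bag-of-embeddings classifier
X = rng.integers(0, V, size=(n, 5))   # n inputs of 5 tokens each
y = rng.integers(0, 2, size=n)        # arbitrary binary labels

def avg_loss(trigger):
    """Average cross-entropy over the batch with `trigger` prepended to every input."""
    total = 0.0
    for tokens, label in zip(X, y):
        emb = E[np.concatenate(([trigger], tokens))].mean(axis=0)
        logits = W @ emb
        logp = logits - np.log(np.exp(logits).sum())
        total -= logp[label]
    return total / n

def loss_grad_wrt_trigger(trigger):
    """Gradient of the average loss w.r.t. the trigger's embedding vector."""
    g = np.zeros(d)
    L = X.shape[1] + 1                # sequence length after prepending the trigger
    for tokens, label in zip(X, y):
        emb = E[np.concatenate(([trigger], tokens))].mean(axis=0)
        logits = W @ emb
        p = np.exp(logits) / np.exp(logits).sum()
        p[label] -= 1.0               # softmax - one-hot
        g += (W.T @ p) / L            # d(mean embedding)/d(trigger embedding) = I/L
    return g / n

# Gradient-projection search: step the trigger embedding uphill on the loss,
# then project back onto the vocabulary by picking the nearest word embedding.
trigger, lr = 0, 1.0
history = [avg_loss(trigger)]
for _ in range(10):
    e_adv = E[trigger] + lr * loss_grad_wrt_trigger(trigger)
    candidate = int(np.argmin(((E - e_adv) ** 2).sum(axis=1)))
    if avg_loss(candidate) > history[-1]:  # greedy accept: keep only improving flips
        trigger = candidate
    history.append(avg_loss(trigger))

print("trigger word id:", trigger)
print("loss before/after:", history[0], history[-1])
```

By construction the accepted loss sequence is non-decreasing; a multi-word trigger would repeat the same step-and-project update for each trigger position in turn.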


Presented at:
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK, 2019
Year:
2019
Record created 2019-02-27, last modified 2019-02-28

