Dynamic Autoencoders Against Adversarial Attacks
Abstract
Neural networks are the target of numerous adversarial attacks, in which an adversary perturbs a model's input with noise that is small yet large enough to fool the model. In this article, we propose to dynamically add autoencoders drawn from a pretrained set to a base model as a countermeasure against such attacks. In doing so, we modify the underlying label regions of the model to be protected, leaving the adversary unable to craft relevant adversarial perturbations. Our experiments confirm the effectiveness of our protection when the pretrained set contains enough elements.
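As a rough illustration of the idea, the sketch below composes a randomly drawn pretrained autoencoder with the base classifier at inference time, so that the effective decision regions change between queries. The class and variable names (DynamicAEDefense, autoencoder_pool, base_model) are our own illustrative assumptions and not the implementation used in the paper.

```python
import random

import torch
import torch.nn as nn


class DynamicAEDefense(nn.Module):
    """Wrap a base classifier with an autoencoder chosen at random
    from a pool of pretrained autoencoders on each forward pass."""

    def __init__(self, base_model: nn.Module, autoencoder_pool):
        super().__init__()
        self.base_model = base_model
        # Registering the pool as a ModuleList keeps the autoencoders on
        # the same device / dtype as the wrapper.
        self.autoencoder_pool = nn.ModuleList(autoencoder_pool)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Draw one pretrained autoencoder at random; since the choice differs
        # between queries, the label regions observed by the adversary shift,
        # which is what makes precomputed perturbations irrelevant.
        ae = random.choice(self.autoencoder_pool)
        return self.base_model(ae(x))
```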