Journal article in Procedia Computer Science, 2023

Dynamic Autoencoders Against Adversarial Attacks

Abstract

Neural networks are the target of numerous adversarial attacks, in which the adversary perturbs a model's input with noise that is small, yet large enough to fool the model. In this article, we propose dynamically adding autoencoders from a pretrained set to a base model as a countermeasure to such attacks. In doing so, we modify the underlying label regions of the protected model, leaving the adversary unable to craft relevant adversarial perturbations. Our experiments confirm the effectiveness of our protection when the pretrained set contains enough elements.
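The abstract only outlines the mechanism, so here is a minimal PyTorch sketch of the general idea, not the authors' implementation. The wrapper class `DynamicAEDefense`, the helper `make_ae`, and all shapes are names and choices invented here for illustration: on each forward pass, one autoencoder is drawn at random from the pretrained pool and its reconstruction is fed to the base classifier, so the label regions an attacker probes can change between queries.

```python
import random

import torch
import torch.nn as nn


class DynamicAEDefense(nn.Module):
    """Hypothetical wrapper: a base classifier protected by a pool of
    pretrained autoencoders, one of which is drawn at random per query."""

    def __init__(self, base_model: nn.Module, autoencoders: list):
        super().__init__()
        self.base_model = base_model
        self.autoencoders = nn.ModuleList(autoencoders)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pick one pretrained autoencoder at random for this query, so the
        # effective decision regions vary from one inference to the next.
        ae = random.choice(self.autoencoders)
        # Classify the autoencoder's reconstruction instead of the raw input.
        return self.base_model(ae(x))


# Toy usage on MNIST-shaped inputs (architecture chosen here for illustration).
def make_ae() -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
    )


classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
defense = DynamicAEDefense(classifier, [make_ae() for _ in range(5)])
logits = defense(torch.randn(2, 1, 28, 28))  # a different AE may fire each call
```

Because the autoencoder is re-sampled per query, a perturbation crafted against one configuration need not carry over to the next; per the abstract, the protection holds when the pretrained pool contains enough elements.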

Dates and versions

hal-04271007, version 1 (05-11-2023)

Identifiers

Cite

Hervé Chabanne, Vincent Despiegel, Stéphane Gentric, Linda Guiga. Dynamic Autoencoders Against Adversarial Attacks. Procedia Computer Science, 2023, 220, pp.782-787. ⟨10.1016/j.procs.2023.03.104⟩. ⟨hal-04271007⟩