Conference paper, Year: 2023

Dodging the Double Descent in Deep Neural Networks

Abstract

Finding the optimal size of deep learning models is a timely problem with broad impact, especially for energy-saving schemes. Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community: as the model's size grows, the performance first degrades and then improves again. This raises serious questions about the model size needed to maintain high generalization: the model must be sufficiently over-parametrized, yet adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, although a final answer is yet to be found. We empirically observe that there is hope to dodge the double descent in complex scenarios with proper regularization, as simple ℓ2 regularization already contributes positively toward this goal.
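For illustration only: the abstract does not specify the training setup, but the ℓ2 regularization it mentions is commonly applied as weight decay in the optimizer. The sketch below is a minimal, assumed PyTorch example; the model, data, and weight_decay value are hypothetical placeholders, not the authors' configuration.

# Minimal sketch: applying L2 regularization (weight decay) during training.
# All specifics (model, data, hyperparameters) are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)
criterion = nn.CrossEntropyLoss()

# weight_decay adds an L2 penalty on the weights to the update rule,
# i.e. the "simple ℓ2 regularization" the abstract refers to.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call; replace with a real DataLoader.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))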

Dates and versions

hal-04206447, version 1 (13-09-2023)

Identifiers

Cite

Victor Quétu, Enzo Tartaglione. Dodging the Double Descent in Deep Neural Networks. 2023 IEEE International Conference on Image Processing (ICIP), Oct 2023, Kuala Lumpur, Malaysia. pp.1625-1629, ⟨10.1109/ICIP49359.2023.10222624⟩. ⟨hal-04206447⟩