A fully differentiable model for unsupervised singing voice separation
Conference paper, Year: 2024


Abstract

A novel model for unsupervised music source separation was recently proposed by Schulze-Forster et al. in [1]. This model addresses some of the major shortcomings of existing source separation frameworks: it eliminates the need for isolated sources during training, performs efficiently with limited data, and can handle homogeneous sources (such as singing voices). However, it relies on an external multipitch estimator and on an ad hoc voice assignment procedure. In this paper, we extend this framework into a fully differentiable model by integrating a multipitch estimator and a novel differentiable assignment module within the core model. We demonstrate the merits of our approach through a set of experiments and highlight, in particular, its potential for processing diverse and unseen data.
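The abstract does not spell out how the assignment is made differentiable. Purely as an illustration of the general idea, the sketch below replaces a hard, rule-based assignment of pitch tracks to sources with a temperature-controlled softmax, so gradients can flow through the assignment step. The class name, feature dimensions, and overall design are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class SoftVoiceAssignment(nn.Module):
    """Toy differentiable assignment: maps K pitch tracks to S sources with a
    temperature-controlled softmax instead of a hard argmax, so gradients can
    flow end-to-end. Illustrative only; not the module described in the paper."""

    def __init__(self, feat_dim: int, n_sources: int, temperature: float = 0.5):
        super().__init__()
        self.score = nn.Linear(feat_dim, n_sources)  # per-track assignment logits
        self.temperature = temperature

    def forward(self, track_feats: torch.Tensor) -> torch.Tensor:
        # track_feats: (batch, K, feat_dim), one feature vector per pitch track
        logits = self.score(track_feats)                          # (batch, K, S)
        weights = torch.softmax(logits / self.temperature, dim=-1)  # soft one-hot over sources
        return weights  # weights each track's contribution to each source


# Usage (all sizes hypothetical): 4 pitch tracks, 2 voices, 16-dim track embeddings.
assign = SoftVoiceAssignment(feat_dim=16, n_sources=2)
weights = assign(torch.randn(8, 4, 16))
print(weights.shape)  # torch.Size([8, 4, 2]); each row sums to 1 over the source axis
```

A lower temperature pushes the soft weights toward a hard one-hot assignment while keeping the operation differentiable during training.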
Main file: 2023_Icassp_Paper_Chouteau-11.pdf (6.28 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04356813 , version 1 (20-12-2023)
hal-04356813 , version 2 (29-01-2024)

Identifiers

  • HAL Id : hal-04356813 , version 1

Cite

Gael Richard, Pierre Chouteau, Bernardo Torres. A fully differentiable model for unsupervised singing voice separation. IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr 2024, Seoul, South Korea. ⟨hal-04356813v1⟩
366 views
264 downloads
