Conference paper, 2023

JEFFREYS DIVERGENCE-BASED REGULARIZATION OF NEURAL NETWORK OUTPUT DISTRIBUTION APPLIED TO SPEAKER RECOGNITION

Abstract

A new loss function for speaker recognition with deep neural networks is proposed, based on the Jeffreys divergence. Adding this divergence to the cross-entropy loss makes it possible to maximize the target value of the output distribution while smoothing the non-target values. This objective function yields highly discriminative features. Beyond this effect, we provide a theoretical justification of its effectiveness and investigate how this loss function affects the model, in particular its impact across dataset types (i.e., in-domain or out-of-domain w.r.t. the training corpus). Our experiments show that the Jeffreys loss consistently outperforms the state of the art for speaker recognition, especially on out-of-domain data, and helps limit false alarms.
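The paper's exact formulation is not reproduced on this page. The following is a minimal PyTorch sketch of the idea described in the abstract, assuming the regularizer is the Jeffreys (symmetric KL) divergence between the softmax output and a smoothed one-hot reference distribution; the function name jeffreys_regularized_ce and the hyper-parameters alpha and beta are illustrative placeholders, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def jeffreys_regularized_ce(logits, targets, num_classes,
                            alpha=0.1, beta=1.0, eps=1e-8):
    """Cross-entropy plus a Jeffreys-divergence regularizer (sketch).

    The regularizer is the symmetric KL divergence J(p, q) = KL(p||q) + KL(q||p)
    between the softmax output p and a smoothed one-hot reference q, which pushes
    probability mass onto the target class while flattening non-target values.
    alpha (smoothing mass) and beta (regularizer weight) are assumed
    hyper-parameters for illustration only.
    """
    # Model output distribution, clamped away from zero for numerical safety.
    p = F.softmax(logits, dim=-1).clamp_min(eps)

    # Smoothed one-hot reference: alpha spread over non-target classes,
    # 1 - alpha on the target class.
    q = torch.full_like(p, alpha / (num_classes - 1))
    q.scatter_(-1, targets.unsqueeze(-1), 1.0 - alpha)

    # Standard cross-entropy term on the raw logits.
    ce = F.cross_entropy(logits, targets)

    # Jeffreys divergence: (p - q) * (log p - log q), summed over classes.
    jeffreys = ((p - q) * (p.log() - q.log())).sum(dim=-1).mean()

    return ce + beta * jeffreys

# Usage: logits from a speaker-classification head (here, 1000 speakers).
logits = torch.randn(4, 1000)
targets = torch.randint(0, 1000, (4,))
loss = jeffreys_regularized_ce(logits, targets, num_classes=1000)
```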
Main file

2023038944.pdf (171.78 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04266620, version 1 (31-10-2023)

License

CC0 - Public Domain Dedication

Identifiers

  • HAL Id: hal-04266620, version 1

Cite

Pierre-Michel Bousquet, Mickael Rouvier. JEFFREYS DIVERGENCE-BASED REGULARIZATION OF NEURAL NETWORK OUTPUT DISTRIBUTION APPLIED TO SPEAKER RECOGNITION. ICASSP 2023, Jun 2023, Rhodes, Greece. ⟨hal-04266620⟩