Comparing Statistical and Neural Models for Learning Sound Correspondences - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference paper · Year: 2020

Abstract

Cognate prediction and proto-form reconstruction are key tasks in computational historical linguistics that rely on the study of sound change regularity. Solving these tasks is very similar to machine translation, yet methods from that field have barely been applied to historical linguistics. In this paper, we therefore investigate the learnability of sound correspondences between a proto-language and its daughter languages for two machine-translation-inspired models, one statistical, the other neural. We first carry out our experiments on plausible artificial languages, without noise, in order to study the role of each parameter on the algorithms' respective performance under near-perfect conditions. We then study real languages, namely Latin, Italian and Spanish, to see whether those results generalise. We show that both model types manage to learn sound changes despite data scarcity, although the best-performing model type depends on several parameters, such as the size of the training data, the degree of ambiguity, and the prediction direction.
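To make the task concrete, the framing above can be sketched in miniature: a toy statistical model that learns character-level sound correspondences from aligned proto/daughter word pairs. This is an illustrative simplification only, assuming artificial data generated by a single deterministic sound change (it is not the paper's actual models or dataset).

```python
from collections import Counter, defaultdict

# Hypothetical toy "sound change": voiceless stops become voiced
# in the daughter language (p->b, t->d, k->g).
change = str.maketrans("ptk", "bdg")

# Artificial proto-language words and their derived daughter forms.
proto_words = ["pata", "toki", "kipu", "tapa", "potu"]
pairs = [(w, w.translate(change)) for w in proto_words]

# Count per-character correspondences from the (equal-length) aligned pairs.
counts = defaultdict(Counter)
for proto, daughter in pairs:
    for p, d in zip(proto, daughter):
        counts[p][d] += 1

def predict(proto_word):
    """Map each proto character to its most frequent correspondent."""
    return "".join(
        counts[c].most_common(1)[0][0] if c in counts else c
        for c in proto_word
    )

print(predict("kapot"))  # -> "gabod"
```

Even this naive frequency-count model recovers a regular, noise-free sound change from a handful of examples; the paper's statistical and neural translation models tackle the much harder setting of context-dependent, ambiguous correspondences and real (Latin, Italian, Spanish) data.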
Main file: Paper_LT4HALA___Comparing_Statistical_and_Neural_Models_for_Learning_Sound_Correspondences_vF.pdf (299.55 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02529929 , version 1 (02-04-2020)

Identifiers

  • HAL Id : hal-02529929 , version 1

Cite

Clémentine Fourrier, Benoît Sagot. Comparing Statistical and Neural Models for Learning Sound Correspondences. LT4HALA 2020 - First Workshop on Language Technologies for Historical and Ancient Languages, May 2020, Marseille, France. ⟨hal-02529929⟩