Exploring Semantics in Pretrained Language Model Attention - Équipe de Recherche en Ingénierie des Connaissances
Conference Paper, 2024

Exploring Semantics in Pretrained Language Model Attention


Abstract Meaning Representations (AMRs) encode the semantics of sentences in the form of graphs. Vertices represent instances of concepts, and labeled edges represent semantic relations between those instances. Language models (LMs) operate by computing weights of the edges of per-layer complete graphs whose vertices are the words of a sentence or of a whole paragraph. In this work, we investigate the ability of the attention heads of two LMs, RoBERTa and GPT2, to detect the semantic relations encoded in an AMR. This is an attempt to demonstrate the semantic capabilities of these models without fine-tuning. To do so, we apply both unsupervised and supervised learning techniques.
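The two structures the abstract compares can be sketched in a few lines of plain Python; the example sentence, concept names, and relation labels below are illustrative choices, not taken from the paper:

```python
import math

# An AMR is a labeled directed graph: vertices are concept instances,
# labeled edges are semantic relations. A toy AMR for "The boy wants to go"
# (labels follow common AMR conventions; this example is illustrative).
amr_vertices = {"w": "want-01", "b": "boy", "g": "go-02"}
amr_edges = {("w", "b"): ":ARG0", ("w", "g"): ":ARG1", ("g", "b"): ":ARG0"}

def softmax(scores):
    """Turn a row of raw scores into attention weights summing to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# An attention head assigns a weight to every ordered pair of tokens,
# i.e. it weights the edges of a complete directed graph over the tokens.
tokens = ["The", "boy", "wants", "to", "go"]
raw_scores = [[1.0 if i == j else 0.2 for j in range(len(tokens))]
              for i in range(len(tokens))]  # stand-in for query-key scores
attention = [softmax(row) for row in raw_scores]

# Each row is a probability distribution over all tokens in the sentence.
for row in attention:
    assert abs(sum(row) - 1.0) < 1e-9
```

The question the paper studies is whether, for a labeled AMR edge such as `("w", "b"): ":ARG0"`, some attention head places reliably high weight on the corresponding token pair.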
Main file: sem2024.pdf (5.72 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04634835, version 1 (04-07-2024)

Frédéric Charpentier, Jairo Cugliari, Adrien Guille. Exploring Semantics in Pretrained Language Model Attention. 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), Jun 2024, Mexico City, Mexico. pp.326-333. ⟨hal-04634835⟩