
Constructivist cognitive architecture: a model for designing a self-motivated agent capable of sense-making and of constructing knowledge of the environment

Abstract: Infants are excellent at interacting with their environment. Especially in the initial phase of cognitive development, they exhibit amazing abilities to generate novel behaviors in unfamiliar situations, and to explore actively and learn effectively while lacking extrinsic rewards from the environment. These abilities of sense-making and knowledge construction set them apart from even the most advanced autonomous robots. For most artificial agents (and robots), acquiring such abilities is overwhelming. In most traditional Artificial Intelligence (AI) approaches, learning is usually insufficient, subject to various biases, and lacks flexibility. Seeking ways to explain the learning mechanism behind infants' early cognitive development, and to replicate some of these abilities in an autonomous agent, has become a focal point of recent efforts in robotics and AI research. In this dissertation, I propose a computational model of a Constructivist Cognitive Architecture (CCA) as a way towards simulating the early learning mechanism of infants' cognitive development, based on theories of enactive cognition, intrinsic motivation, and constructivist epistemology. The CCA allows a self-motivated agent to autonomously construct its perception of the environment and to acquire capabilities of self-adaptation and flexibility, generating appropriate behaviors to tackle diverse situations while interacting with the environment. In contrast with traditional cognitive architectures, the introduced model neither initially endows the agent with prior knowledge of its environment, nor supplies it with knowledge during its learning process. Accordingly, I am not proposing an algorithm that optimizes exploration of a predefined problem space to reach predefined goal states.
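The enaction cycle underlying such an architecture can be sketched in a few lines. The following is a minimal illustrative sketch, not the thesis implementation: all names, experiments, and results are assumptions. The agent has no model of the environment; it only accumulates a trace of (experiment, result) pairs, the interactions, as it acts.

```python
# Hedged sketch of an enaction cycle with no prior knowledge of the
# environment. The agent only ever sees (experiment, result) pairs;
# the environment's behavior is hidden from it. Names are illustrative.

def environment(experiment):
    """Toy environment, unknown to the agent: experiment 'a' yields
    result 'r1'; any other experiment yields 'r2'."""
    return "r1" if experiment == "a" else "r2"

class Agent:
    def __init__(self, experiments):
        self.experiments = experiments
        self.enacted = []  # interaction trace, built only from experience

    def step(self):
        # With no prior knowledge, try each known experiment in turn.
        experiment = self.experiments[len(self.enacted) % len(self.experiments)]
        result = environment(experiment)
        interaction = (experiment, result)
        self.enacted.append(interaction)
        return interaction

agent = Agent(["a", "b"])
trace = [agent.step() for _ in range(4)]
print(trace)  # [('a', 'r1'), ('b', 'r2'), ('a', 'r1'), ('b', 'r2')]
```

Everything the agent will ever know about its world must be constructed from this trace; no state of the environment is ever given to it directly.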
Instead, I propose a way for the agent to autonomously encode its interaction experiences and reuse behavioral patterns, based on the agent's self-motivation implemented as inborn proclivities that drive the agent in a proactive way. In addition, I present two forms of self-motivation: successfully enacting sequences of interactions (autotelic motivation), and preferentially enacting interactions that have predefined positive values (interactional motivation). Following these drives, the agent autonomously learns regularities afforded by the environment, and constructs a causal perception of phenomena whose hypothetical presence in the environment explains these regularities. Furthermore, I propose a Bottom-up hiErarchical sequential Learning model based on the CCA, called BEL-CA, as a solution for an autonomous agent to learn hierarchical sequences of behaviors and to acquire capabilities of self-adaptation and flexibility. The agent represents its current situation in terms of perceived affordances that develop through the agent's experience. This situational representation works as an emerging situation awareness that is grounded in the agent's interaction with its environment and that, in turn, generates expectations and activates adapted behaviors. Through its activity and these aspects of behavior (behavioral proclivity, situation awareness, and hierarchical sequential learning), the agent begins to exhibit emergent sensibility, intrinsic motivation, and autonomous learning. Moreover, I introduce a toolkit, GAIT (Generating and Analyzing Interaction Traces Toolkit), to analyze the learning process at run time. I use GAIT to report and explain the detailed learning process and the structured behaviors that the agent has learned at each decision-making step.
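The bottom-up hierarchical learning idea can be illustrated with a minimal sketch. This is not the BEL-CA implementation; the interaction labels and the counting scheme are illustrative assumptions. The core idea shown is only that adjacent interactions in the trace are abstracted into composite interactions at the next level up, weighted by how often they recur.

```python
# Illustrative sketch (not the BEL-CA code) of bottom-up hierarchical
# sequence learning: each adjacent pair of enacted interactions is
# recorded as a composite interaction one level up, with a weight
# counting how often that pair has occurred in the trace.

from collections import Counter

def learn_composites(trace):
    """Count adjacent pairs of interactions as weighted composites."""
    return Counter(zip(trace, trace[1:]))

trace = ["i1", "i2", "i1", "i2", "i3"]
composites = learn_composites(trace)
print(composites.most_common(1))  # [(('i1', 'i2'), 2)]
```

Applying the same step to the composite trace would yield third-level sequences, which is what makes the learning hierarchical rather than flat.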
I report an experiment in which the agent learned to successfully interact with its environment and to avoid unfavorable interactions, using regularities discovered through interaction.
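The avoidance behavior reported above can be sketched through interactional motivation. The following is a hedged toy model under stated assumptions: the valence table, experiments, and environment are all illustrative, not the experimental setup of the thesis. The agent prefers the experiment whose past interactions have the highest average predefined valence, so after one unfavorable outcome it switches away from it.

```python
# Toy sketch of interactional motivation (illustrative assumptions, not
# the thesis experiment): each interaction has a predefined valence, and
# the agent learns from its own trace which experiment tends to yield
# positively valued interactions, avoiding the unfavorable one.

VALENCE = {("a", "r1"): -1, ("a", "r2"): -1,
           ("b", "r1"): 1, ("b", "r2"): 1}

def environment(experiment):
    # Toy deterministic environment: 'a' yields 'r1', 'b' yields 'r2'.
    return "r1" if experiment == "a" else "r2"

def expected_valence(trace, experiment):
    """Average valence observed so far when enacting this experiment;
    unexplored experiments default to 0.0."""
    observed = [VALENCE[i] for i in trace if i[0] == experiment]
    return sum(observed) / len(observed) if observed else 0.0

trace = []
for step in range(6):
    # Propose the experiment with the highest expected valence.
    experiment = max(["a", "b"], key=lambda e: expected_valence(trace, e))
    trace.append((experiment, environment(experiment)))

print(trace)  # first interaction is unfavorable; all later ones avoid it
```

After the first step enacts the negatively valued interaction ('a', 'r1'), its expected valence drops below the untried alternative, and the agent settles on the favorable interaction ('b', 'r2') for the rest of the run.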
Submitted on : Wednesday, October 27, 2021 - 4:17:29 PM
Last modification on : Wednesday, November 3, 2021 - 3:59:34 AM


Version validated by the jury (STAR)


  • HAL Id : tel-03406086, version 1


Jianyong Xue. Architecture cognitive constructiviste : un modèle pour concevoir un agent automotivé capable de faire du sens et de construire des connaissances de l'environnement. Neural and Evolutionary Computing [cs.NE]. Université de Lyon, 2020. English. ⟨NNT : 2020LYSE1242⟩. ⟨tel-03406086⟩


