This is where I turn radical and eliminate perceptions and actions as primitive notions in cognitive models. I now wonder why it took so many years to arrive at such an obvious and elegant formalism.
In this open peer commentary on an article by Roesch et al., we argue that multi-agent systems are not the only alternative to cognitivism. We present Ernest in Roesch et al.'s environment to show that Ernest is constructivist too.
Ernest 12 categorizes the entities in its environment based on the possibilities of interaction that they afford, and adjusts its behavior to these categories.
Top-left: Ernest in its environment. The "eye" (half-circle) takes the color of the entity that gets Ernest's attention at any given time.
Top-right: Ernest's spatial memory. Interactions are localized in space, and Ernest updates their positions as it moves. Entities are constructed where interactions overlap (a sketch of this mechanism follows the caption). Rectangles and trapezoids represent interactions; circles represent entities.
Bottom: the activity trace. Its bottom row shows the interactions (rectangles and trapezoids) enacted to the left of, in front of, or to the right of Ernest. The middle row shows the motivational value of each enacted interaction as a bar graph (green when positive, red when negative). The top row shows the actions (half-circles: turn; triangles: try to step forward) and the entities (blue and green circles) learned over time.
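As a rough illustration of the spatial memory described above, here is a minimal Python sketch, assuming a grid world and names of our own (this is not Ernest's implementation): interactions are stored egocentrically, shifted and rotated as the agent moves, and an entity is constructed where interactions overlap.

```python
import math
from collections import defaultdict

# Each remembered interaction is (label, x, y) in the agent's own
# frame: x points forward, y points to the left.
memory = []

def remember(label, x, y):
    memory.append((label, float(x), float(y)))

def on_step_forward():
    """Agent moved one cell forward: shift remembered positions back."""
    global memory
    memory = [(label, x - 1.0, y) for (label, x, y) in memory]

def on_turn(angle):
    """Agent turned by `angle` radians: rotate positions the other way."""
    global memory
    c, s = math.cos(-angle), math.sin(-angle)
    memory = [(label, c * x - s * y, s * x + c * y)
              for (label, x, y) in memory]

def entities():
    """Construct an entity wherever two or more interactions overlap
    (here: fall in the same grid cell)."""
    cells = defaultdict(list)
    for label, x, y in memory:
        cells[(round(x), round(y))].append(label)
    return {cell: labels for cell, labels in cells.items() if len(labels) >= 2}

# A "see" and a "touch" interaction localized at the same spot overlap,
# so an entity is constructed one cell in front of the agent.
remember("see_target", 1, 0)
remember("touch_target", 1, 0)
print(entities())          # {(1, 0): ['see_target', 'touch_target']}

# After turning left by 90 degrees, the same entity is now to the right.
on_turn(math.pi / 2)
print(entities())          # {(0, -1): ['see_target', 'touch_target']}
```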
In this run, Ernest learns the "bishop behavior" during the first 50 steps. On step 78, we introduce two targets in a row. The spatial memory shows that Ernest registers interactions with both targets at the same time; combined with its rudimentary attentional system, it allows Ernest to focus on one target at a time.
On step 110, we introduce a "wall brick", and Ernest learns that this kind of entity affords the interaction "bumping". Subsequently, when we introduce a target, Ernest preferentially moves toward the target rather than the wall brick, because it has learned that targets are edible.
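The following minimal sketch, with illustrative names and values of our own (not Ernest's code), conveys the gist of this affordance-based categorization: an entity kind is characterized only by the interaction it was observed to afford, and behavior follows the learned motivational value of that interaction.

```python
# Motivational values of primitive interactions: eating a target is
# satisfying, bumping into a wall is not.
VALUES = {"eat": 10, "bump": -10, "step": -1, "turn": -3}

# Learned categories: entity kind -> the interaction it affords.
learned_affordances = {}

def record_experience(entity_kind, enacted_interaction):
    """After enacting an interaction on an entity, remember what it affords."""
    learned_affordances[entity_kind] = enacted_interaction

def preferred_entity(visible_entities):
    """Prefer the entity whose learned affordance has the highest value;
    entities with unknown affordances get a neutral value of 0."""
    def value(kind):
        return VALUES.get(learned_affordances.get(kind), 0)
    return max(visible_entities, key=value)

# Mirroring the run above: after bumping into a wall brick and eating a
# target, Ernest heads for targets rather than wall bricks.
record_experience("wall_brick", "bump")
record_experience("target", "eat")
print(preferred_entity(["wall_brick", "target"]))  # -> target
```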
This paper introduces the Enactive Markov Decision Process (EMDP), a new way of modeling an agent interacting with an environment, inspired by the Theory of Enaction. It also relates Ernest's motivational principle to the autotelic principle (Steels, 2004) and the optimal experience principle (Csikszentmihalyi, 1990). It introduces ECA, the Enactive Cognitive Architecture that drives Ernest 11, and reports the Ernest 11.2 experiment.
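To make the contrast with a classical Markov Decision Process concrete, here is a hedged sketch of the interaction loop as we read it: the agent selects an intended interaction and receives the enacted interaction in return, with no separate perception or action primitive. The toy environment and the values are ours, not the paper's.

```python
VALUES = {"step": -1, "bump": -10, "eat": 10}

def environment(intended, wall_ahead):
    """Toy environment: intending to step in front of a wall enacts 'bump'."""
    if intended == "step" and wall_ahead:
        return "bump"
    return intended

def emdp_step(intended, wall_ahead):
    # The agent's only "percept" is the enacted interaction itself;
    # the intention succeeded when intended and enacted coincide.
    enacted = environment(intended, wall_ahead)
    return enacted, enacted == intended, VALUES[enacted]

for wall in (False, True):
    enacted, ok, value = emdp_step("step", wall_ahead=wall)
    print(f"wall_ahead={wall}: enacted={enacted}, succeeded={ok}, value={value}")
```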
This paper addresses the sensemaking demonstration problem: the problem of demonstrating that an agent gives meaning to, or understands, its experience. We present a methodology for producing empirical evidence, based on an analysis of the agent's behavior, to support or contradict the claim that the agent is capable of a rudimentary form of sensemaking.
As an example, we report an analysis of Ernest's behavior in the Small Loop Problem and conclude that Ernest is capable of a rudimentary form of sensemaking. This paper is an extended version of our previous paper presented at BICA2012.