AImergence (Artificial Intelligence emergence) is a game in which you play the role of Ernest learning to interact with his environment. Please share this page and leave comments.
Olivier Georgeon's research blog—also known as the story of little Ernest, the developmental agent. Keywords: situated cognition, constructivist learning, intrinsic motivation, bottom-up self-programming, individuation, theory of enaction, developmental learning, artificial sense-making, biologically inspired cognitive architectures, agnostic agents (without ontological assumptions about the environment).
Wednesday, July 1, 2015
Saturday, September 20, 2014
Inverting the interaction cycle
Inverting the Interaction Cycle to Model Embodied Agents (2014). Olivier L. Georgeon and Amélie Cordier. Procedia Computer Science, 41, pp 243-248. 5th Annual International Conference on Biologically Inspired Cognitive Architectures. doi:10.1016/j.procs.2014.11.109.
This paper relates very much to the first lesson of the IDEAL MOOC.
Tuesday, July 22, 2014
IDEAL MOOC
Monday, April 14, 2014
Create your Ernest
Monday, February 24, 2014
Lyon Startup Weekend
We won the jury's favorite prize ("coup de coeur du jury", 4th prize) at the Lyon Startup Weekend! The IDEAL team from left to right: Renaud Detaille, Raphaël Cazorla, Olivier Georgeon, Séverin Bruhat, and Julien Millet. Many thanks to the organizing team for this crazy, unforgettable weekend. We love you too!
Watch the finale video (our presentation runs from 2:47:37 to 2:56:20; in French, low audio quality).
Friday, January 24, 2014
Inferring actions and observations from interactions
Garnier J., Georgeon O., and Cordier A. 2013. Inferring Actions and Observations from Interactions. In the Second Workshop on Goal Reasoning at Advances in Cognitive Systems (ACS2013), Baltimore, MD, pp. 26-35.
Following the Radical Interactionism paradigm introduced previously, this paper investigates the construction of intentional actions and meaningful observations from regularities observed in sequences of sensorimotor interactions.
In doing so, we take the opposite stance of most machine learning approaches that seek to learn a mapping between a predefined space of actions and a predefined space of observations. Instead, our agent begins with a predefined space of interactions and learns actions and observations as secondary constructs. This paper also reports the Ernest 12 experiment.
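The inverted cycle described here can be sketched as a toy loop (a hedged illustration with made-up interaction names, valences, and a placeholder policy — not the authors' code). The agent's primitives are interactions; it proposes an *intended* interaction and the environment returns the interaction actually *enacted*. Actions and observations would later be derived by grouping interactions that share regularities.

```python
# Hypothetical sketch of the Radical Interactionism loop.
# Names and valences are assumptions for illustration only.

INTERACTIONS = {"step": 10, "bump": -10, "turn": -3}  # valence per interaction

def decide(history):
    """Placeholder policy: intend the interaction with the highest valence."""
    return max(INTERACTIONS, key=INTERACTIONS.get)

def environment(intended):
    """Toy environment: trying to step always results in bumping here."""
    return "bump" if intended == "step" else intended

history = []
for _ in range(3):
    intended = decide(history)
    enacted = environment(intended)  # the agent experiences an interaction,
    history.append((intended, enacted))  # not a separate observation
```

Note that the agent never receives an "observation" distinct from the enacted interaction — that distinction is precisely what the paper argues should be learned, not predefined.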
Friday, December 6, 2013
Radical Interactionism
Olivier L. Georgeon and David W. Aha 2013. The Radical Interactionism Conceptual Commitment. Journal of Artificial General Intelligence 4(2): 31-36.
This is where I turn radical and eliminate perceptions and actions as primitive notions of cognitive models. Now I wonder why it took so many years to come up with such an obvious and elegant formalism.
Friday, November 15, 2013
Single Agents Can Be Constructivist Too
Olivier L. Georgeon and Salima Hassas 2013. Single Agents Can Be Constructivist too. Constructivist Foundations 9(1): 40-42.
In this open peer commentary on an article by Roesch et al., we argue that multi-agent systems are not the only alternative to cognitivism. We present Ernest in Roesch et al.'s environment to show that Ernest is constructivist too.
Thursday, September 12, 2013
Ernest 12
Top-left: Ernest in its environment. The "eye" (half-circle) takes the color of the entity that gets Ernest's attention at any given time.
Top-right: Ernest's spatial memory. Interactions are localized in space, and Ernest updates their position as he moves. Entities are constructed where interactions overlap. Rectangles and trapezoids represent interactions, circles represent entities.
Bottom: activity trace. Bottom row: the interactions (rectangles and trapezoids) enacted on the left, in front, or on the right of Ernest. Middle row: the motivational value of the enacted interaction, represented as a bar graph (green when positive, red when negative). Top row: the actions (half-circles: turn; triangles: try to step forward) and the entities (blue and green circles) learned over time.
In this run, Ernest learns the "bishop behavior" during the first 50 steps. On step 78, we introduce two targets in a row. The spatial memory shows that Ernest interacts with these two targets at the same time. Ernest's spatial memory (associated with its rudimentary attentional system) allows Ernest to focus on one target at a time.
On step 110, we introduce a "wall brick", and Ernest learns that this kind of entity affords the interaction "bumping". Subsequently, when we introduce a target, Ernest preferentially goes toward the target rather than toward the wall brick, because he has learned that targets are edible.
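The affordance learning just described can be pictured with a minimal table (a sketch under assumed names and valences, not Ernest 12's actual mechanism): each learned entity is associated with the interaction it affords, and the valences guide approach or avoidance.

```python
# Hypothetical affordance table; entity names, interactions, and
# valences below are assumptions for illustration.
affordances = {}  # entity -> (interaction it affords, valence)

def learn(entity, interaction, valence):
    affordances[entity] = (interaction, valence)

def prefer(entities):
    """Head toward the entity whose afforded interaction has the best valence."""
    return max(entities, key=lambda e: affordances[e][1])

learn("wall_brick", "bump", -10)  # bumping has negative valence
learn("target", "eat", 20)        # eating a target has positive valence
```

With these values, `prefer(["wall_brick", "target"])` picks the target, mirroring the behavior shown in the demo.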
Ernest 12 implements ECA, the Enactive Cognitive Architecture.
(Demo implemented with Ernest r439 and Vacuum r392)
Tuesday, June 18, 2013
Enactive Robot Learning
Olivier L. Georgeon, Christian Wolf, and Simon Gay 2013. An Enactive Approach to Autonomous Agent and Robot Learning. IEEE Third Joint International Conference on Development and Learning and on Epigenetic Robotics (EPIROB2013). Osaka, Japan. August 18-22 2013.
This paper constitutes a short introductory version of our ECA paper. It also presents the experiment of Ernest7 in an e-puck robot.
Tuesday, May 14, 2013
Enactive Cognitive Architecture
Olivier L. Georgeon, James B. Marshall, and Riccardo Manzotti 2013. ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures, Volume 6, pp. 46-57, doi: 10.1016/j.bica.2013.05.006. Also presented at BICA2013.
This paper introduces the Enactive Markov Decision Process, a new way of modeling an agent interacting with an environment, inspired by the Theory of Enaction. It also describes Ernest's motivational principle in relation to the autotelic principle (Steels, 2004) and the optimal experience principle (Csikszentmihalyi, 1990). It introduces ECA, the Enactive Cognitive Architecture that drives Ernest 11, and it reports the Ernest 11.2 experiment.
Monday, February 11, 2013
Sensemaking emergence demonstration
Olivier L. Georgeon and James B. Marshall 2013. Demonstrating sensemaking emergence in artificial agents: A method and an example. International Journal of Machine Consciousness, 5(2), pp 131-144, doi: 10.1142/S1793843013500029.
This paper addresses the sensemaking demonstration problem: the problem of demonstrating that an agent gives meaning to, or understands, its experience. We present a methodology for producing empirical evidence to support or contradict the claim that an agent is capable of a rudimentary form of sensemaking, based on an analysis of the agent's behavior.
As an example, we report an analysis of Ernest's behavior in the Small Loop Problem and we conclude that Ernest is capable of a rudimentary form of sensemaking. This paper is an extended version of our previous paper presented at BICA2012.
Tuesday, November 20, 2012
Training Ernest 7
Ernest 1 develops more sophisticated behaviors than Ernest 2 because he is trained to touch on both of his sides when he faces a wall. Consequently, after being released, he behaves more exploratively than Ernest 2.
Ernest 2 learns to preferentially turn to the right when he faces a wall. Consequently, he tends to keep spinning within limited areas of the environment. Ernest 2's learning is limited by the fact that the environment is initially too complex for him to notice sophisticated sequences that involve touching on both sides.
The importance of training is an interesting property of Ernest because it accords with theories of developmental learning.
(Demo implemented with Ernest r296 and Vacuum r203)
Monday, October 29, 2012
Ernest's source code
Ernest's source code is available here with the instructions to use it. The cleaned-up and tested recommended revision is r296. This revision demonstrates the exact behavior of Ernest 7 reported in the Small Loop Experiment.
Monday, October 15, 2012
Interactional Motivation
Olivier L. Georgeon, James B. Marshall, and Simon L. Gay 2012. Interactional Motivation in Artificial Systems: Between Extrinsic and Intrinsic Motivation. In proceedings of the 2nd Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (EPIROB 2012), San Diego, pp. 1-2.
This paper presents the notion of interactional motivation that drives Ernest, and compares it to reinforcement learning as it is traditionally implemented in Partially Observable Markov Decision Processes (POMDPs).
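The contrast the paper draws can be sketched schematically (the function names and values below are assumptions, not the paper's formalism): reinforcement learning typically rewards the *states* the agent reaches, whereas interactional motivation attaches an inborn valence to each *interaction* itself.

```python
# RL-style: reward is a function of the resulting state.
def rl_reward(state):
    return 1.0 if state == "goal" else 0.0

# Interactional motivation: an inborn valence is attached to each
# interaction itself, with no reference to a state representation.
VALENCE = {"step": 5, "bump": -10, "turn_left": -3, "turn_right": -3}

def drive(enacted_interaction):
    return VALENCE[enacted_interaction]
```

The point of the sketch: `drive` never consults a state or an observation, only the interaction that was enacted.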
Tuesday, August 28, 2012
Ernest 11.5 constructs goals
Ernest 11.5 simulates different possible sequences of interactions in spatial memory before selecting the best sequence to enact. These simulations are represented in the bottom-right area of the video. Simulations that produce predictable results (due to information available in spatial memory) are represented with orange outlines. Simulations that produce unpredictable results (due to the lack of information in spatial memory) are represented in blue. The video shows that Ernest learns to simulate increasingly elaborate sequences of interactions as time goes on (see blue squares and triangles spreading in all directions around Ernest from step 253 on).
The high value associated with stepping on flowers favors simulations that lead to even more stepping on flowers. As a result, Ernest learns to make a u-turn to return to a flower when he passes one (see Ernest repeatedly stepping on the flower from step 260 on).
We find this experiment interesting because it illustrates how an inborn drive can give rise to an explicit goal. Ernest's inborn tendency to step on flowers makes Ernest identify flowers as an interesting goal to reach. Once this goal is recognized, Ernest performs a rudimentary problem-solving computation to reach it. Perhaps the skill of choosing a desirable point in space and finding a sequence of operations to reach it underlies higher-level problem-solving skills.
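The simulate-then-select step can be pictured as a toy model (cells, valences, and the scoring rule here are assumptions, not Ernest 11.5's implementation): each candidate sequence is simulated against spatial memory, marked predictable or unpredictable, and the best one is enacted.

```python
VALENCE = {"step": 5, "turn": -3, "bump": -10}  # assumed values

def simulate(sequence, spatial_memory):
    """Return (predictable, total_valence) for a candidate sequence.
    A move is predictable only when spatial memory knows the cell it
    concerns (toy model of the orange vs. blue outlines in the video)."""
    total, predictable = 0, True
    for interaction, cell in sequence:
        if cell not in spatial_memory:
            predictable = False
        elif spatial_memory[cell] == "wall" and interaction == "step":
            interaction = "bump"  # memory predicts a bump instead
        total += VALENCE[interaction]
    return predictable, total

def choose(candidates, spatial_memory):
    """Enact the sequence with the best (predictable, valence) score."""
    return max(candidates, key=lambda seq: simulate(seq, spatial_memory))
```

Because the score is a `(predictable, valence)` tuple, predictable sequences always beat unpredictable ones, which matches the preference for simulations with known outcomes.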
Tuesday, July 3, 2012
Ernest 11.4 recognizes objects
At the beginning, Ernest learns to interact with empty places and with dark-green walls. From step 76 on, he learns to interact with cyan walls. On step 220, we introduce algae, and he starts learning to interact with them.
Note the funny hesitation on step 234 when Ernest touches an alga for the first time, turns back, and then returns to the alga. Once this new kind of object is learned, Ernest moves through them without hesitation.
Ernest's previous management of bundles (Ernest 11.2) no longer works in this environment because objects can no longer be identified by disjoint bundles of interactions. Some interactions (e.g., bump) are afforded by different objects (dark-green walls and cyan walls). Ernest 11.4, however, does not actually need to fully recognize objects. He adapts to this environment by only learning "compresences" of pairs of interactions. We borrowed the term compresence from the bundle theory of objects to designate the tie between two interactions that are afforded by the same location in space. In this video, compresences are represented by gray circles that contain interactions (in sequential and spatial memory, top and bottom right areas of the video).
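A minimal sketch of how compresence counting might look (the data structure and counting scheme are assumptions, not the published mechanism): whenever two interactions are afforded at the same spatial location, the tie between them is strengthened.

```python
from collections import Counter

# Hypothetical compresence bookkeeping; interaction names are invented.
compresences = Counter()

def record(interactions_at_location):
    """Count every unordered pair of interactions afforded at one location."""
    for a in interactions_at_location:
        for b in interactions_at_location:
            if a < b:  # each unordered pair counted once
                compresences[(a, b)] += 1

record({"touch_cyan", "bump"})
record({"touch_cyan", "bump"})
record({"touch_dark_green", "bump"})
```

In this toy run, "bump" becomes compresent with both wall colors, reflecting the point above that a single interaction can be afforded by different objects.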
The question of identifying objects by bundles of interactions that are consistently compresent remains an open and difficult question. The notion of compresence seems to be still controversial in philosophy of objects. Identifying objects raises the question of making analogies between objects, and learning categories of objects based on similarities in the interactions that they afford.
In this experiment:
Touching a cyan wall ahead generates a specific feeling (cyan squares). Touching a cyan wall on the side generates the same feeling as touching a dark-green wall on the side (dark-green squares). Bumping into a cyan wall feels the same as bumping into a dark-green wall (red triangles). Once learned, touching walls ahead "evokes" bumping ahead (light-red triangles in spatial memory, bottom right area of the video). As previously, the evocation of bumping deters Ernest from trying to move forward towards walls.
Touching an alga ahead generates a specific feeling (light-green squares). Touching an alga on the side generates the same feeling as touching an empty square on the side (white squares). Moving to an alga feels the same as moving to an empty square (white triangles).
(Demo implemented with Ernest r261 and Vacuum r186)
Wednesday, May 30, 2012
Ernest 11.3 in e-puck
This video shows the e-puck robot in the "Box Environment" (left). The possibilities of interaction and the LED signals are the same as with Ernest 7 in e-puck.
The top-right part of the video shows the sequential trace with the same symbols as previously.
The bottom-center shows the new spatial memory (in an egocentric reference frame, with the robot's front oriented toward the right). When Ernest enacts an interaction, the area concerned by this interaction is marked by a halo in spatial memory. Interactions that concern empty places are in white; interactions that concern walls are in green. The superimposition of different interactions at the same spatial location reveals occurrences of empty-place phenomena (white halos) and wall phenomena (green halos).
Note that wrong associations can occur due to false detections. For example: false detection of a wall on the left on steps 221 and 222 (time 2:23). (We turned on additional light to provoke more false detections from step 100, time 0:59.)
The bottom-right part of the video represents coefficients of spatial overlapping between interactions (red segments). The more consistent the overlapping, the shorter the segment. Over time, interactions that concern the same kind of phenomena become grouped together because they consistently overlap. Two bundles of interactions emerge: white interactions form a bundle that represents empty places, green interactions form a bundle that represents walls.
On step 229, the false detection made on steps 221 and 222 leads to a wrong association between the two bundles (mixed white and green halo in the center of spatial memory at time 2:25, and long red segment between the two bundles). This wrong association, however, does not impact Ernest's behavior much because it remains weak.
This experiment demonstrates that Ernest 11.3 handles the imprecision in the robot’s displacements and in the sensors by keeping track of probabilities of presence of phenomena in Ernest's surrounding space. Simultaneously, Ernest gradually learns the notions of empty space phenomenon and wall phenomenon by associating the interactions that they afford. In turn, the recognition of phenomena helps Ernest organize its behavior by prompting interactions adapted to the phenomena that surround him.
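The emergence of bundles from consistent overlap can be sketched as follows (the incremental update rule and threshold are assumptions, not the published coefficients): each pair of interactions carries a consistency score that grows when they co-occur at the same location, and interactions whose pairwise scores are high enough are grouped into a bundle.

```python
# Toy sketch of bundle formation by spatial overlap.
overlap = {}  # (interaction_a, interaction_b) -> consistency in [0, 1]

def update(pair, co_occurred, rate=0.2):
    """Move the pair's consistency toward 1 on co-occurrence, toward 0 otherwise."""
    a, b = sorted(pair)
    old = overlap.get((a, b), 0.0)
    overlap[(a, b)] = old + rate * ((1.0 if co_occurred else 0.0) - old)

def bundles(threshold=0.5):
    """Group interactions whose pairwise overlap exceeds the threshold."""
    groups = []
    for (a, b), c in overlap.items():
        if c > threshold:
            for g in groups:
                if a in g or b in g:
                    g.update({a, b})
                    break
            else:
                groups.append({a, b})
    return groups

for _ in range(5):
    update(("touch_wall", "bump"), co_occurred=True)
update(("touch_wall", "step"), co_occurred=False)
```

Because the score only drifts toward 1 gradually, an occasional false detection (as on steps 221-222 above) leaves a weak coefficient that does not pass the grouping threshold.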
Thursday, May 24, 2012
A Challenge for Emergent Cognition
The Small Loop Problem: A challenge for artificial emergent cognition. Olivier L. Georgeon, James B. Marshall. In proceedings of BICA2012, Annual Conference on Biologically Inspired Cognitive Architectures. Palermo, Italy, pp. 137-144. (October 31, 2012).
This paper presents the Small Loop Problem and how Ernest 11.2 handles it.
Wednesday, April 25, 2012
Ernest 7 in e-puck
We implemented "touch" with the infrared sensors available on the front, left, and right sides of the robot. The range of these sensors was set to approximately 5 cm. When Ernest "touches", the corresponding LED flashes. When touching detects a wall, the two additional LEDs on the rear flash. When he bumps into a wall, all the LEDs flash. The symbols in the trace are the same as previously.
This video shows that Ernest learns to touch ahead before moving forward to avoid bumping, and learns to turn when it reaches a wall.