Ernest 3.2 can now learn second-order schemas, i.e. schemas of primary schemas. However, he does not yet know how to use them.
In this trace, each line except the yellow ones represents an Ernest-Environment cycle. We can read these lines as "affordances", that is, situations that give rise to some behavior. The weights that triggered the behavior are displayed in orange, and the resulting primary schemas are in blue. Each affordance carries an assessment from Ernest's viewpoint: "Yee!", "Boo.", "A-a!" or "O-o", as explained in the previous post.
Like primary schemas, second-order schemas are triples (context, action, expectation), but now each of these three elements is an affordance. When an affordance is assessed "Yee!" or "A-a!", it triggers the learning of a second-order schema (in yellow), made up of the two previously enacted affordances followed by the triggering one.
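To make this learning rule concrete, here is a minimal, purely illustrative sketch of the triggering condition and the resulting triple. The type and field names (Affordance, Assessment, maybe_learn_second_order, etc.) are my own hypothetical shorthand, not Ernest's actual code.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Assessment(Enum):
    """The four assessments an enacted affordance can receive."""
    YEE = "Yee!"
    BOO = "Boo."
    A_A = "A-a!"
    O_O = "O-o"

@dataclass
class Affordance:
    """One non-yellow trace line: an Ernest-Environment cycle."""
    primary_schema: str     # resulting primary schema, e.g. "S10" (blue in the trace)
    assessment: Assessment  # Ernest's appraisal of the enacted affordance

@dataclass
class SecondOrderSchema:
    """A triple of affordances, by analogy with primary schemas."""
    context: Affordance
    action: Affordance
    expectation: Affordance

def maybe_learn_second_order(history: List[Affordance]) -> Optional[SecondOrderSchema]:
    """When the latest affordance is assessed "Yee!" or "A-a!", learn a
    second-order schema from the three most recently enacted affordances:
    the two previous ones become context and action, and the triggering
    one becomes the expectation."""
    if len(history) < 3:
        return None
    triggering = history[-1]
    if triggering.assessment in (Assessment.YEE, Assessment.A_A):
        return SecondOrderSchema(context=history[-3],
                                 action=history[-2],
                                 expectation=triggering)
    return None
```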
For example, at the beginning of this trace, the second-order schema S12 is made up of schemas S6, S8, and S10: S6 is S12's context affordance, S8 is its action affordance, and S10 is its expected affordance. That means that when Ernest encounters a situation where S6 has just been enacted, he should enact S8, because that should bring him to a situation where S10 can be enacted, and S10 will bring one of those delicious Ys!
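To see how S12 might then be exploited, a hypothetical selection step could look like the sketch below, where schemas are referred to by name only. Again, this illustrates the idea rather than the mechanism Ernest actually implements, which is still an open question as discussed next.

```python
from typing import Dict, Optional, Tuple

# A second-order schema as a (context, action, expectation) triple of
# primary-schema names, e.g. S12 = (S6, S8, S10).
SecondOrder = Tuple[str, str, str]

def propose_action(last_enacted: str,
                   second_order: Dict[str, SecondOrder]) -> Optional[str]:
    """If the affordance just enacted matches the context of a known
    second-order schema, propose that schema's action affordance,
    expecting its expectation affordance to become enactable next."""
    for name, (context, action, expectation) in second_order.items():
        if context == last_enacted:
            return action
    return None

# Example with the schema from the trace: after enacting S6,
# Ernest would enact S8, expecting S10 (and one of those delicious Ys).
schemas = {"S12": ("S6", "S8", "S10")}
assert propose_action("S6", schemas) == "S8"
```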
Implementing the exploitation of these second-order schemas, however, still raises many questions. How should primary schemas and second-order schemas compete to trigger behavior? How should reinforcement be distributed between the two levels? What happens when a second-order schema fails? In addition, second-order schemas should themselves constitute more abstract affordances that could serve as elements of even higher-level schemas.
Monday, November 3, 2008