The Cognitive Systems Laboratory is a multidisciplinary environment that supports research in the following areas:

  • Distributed Adaptive Control
  • Multi-robot exploration and coordination
  • Classical conditioning, operant conditioning, and learning models based on the Distributed Adaptive Control framework, which has become a standard in the fields of artificial intelligence and behaviour-based robotics (McFarland and Bosser, 1993; Clancey, 1996; Hendriks-Jansen, 1996; Arkin, 1998; Pfeifer and Scheier, 1999; Cordeschi, 2002).

 Distributed Adaptive Control

The Distributed Adaptive Control architecture.

A: DAC consists of three tightly coupled layers: reactive, adaptive and contextual. The reactive layer endows a behaving system with a prewired repertoire of reflexes (low-complexity unconditioned stimuli and responses, US and UR) that enable it to display simple adaptive behaviours. The activation of any reflex, however, also provides cues for learning that are used by the adaptive layer via representations of internal states, i.e. aversive and appetitive ones. The adaptive layer provides the mechanisms for the adaptive classification of sensory events (conditioned stimuli, CS), using a predictive Hebbian learning rule, and for the reshaping of responses (conditioned responses, CR); it is a model of classical conditioning. The sensory and motor representations formed at the level of adaptive control provide the inputs to the contextual layer, which acquires, retains and expresses sequential representations using systems for short- and long-term memory. The contextual layer describes the goal-oriented learning observed in operant conditioning.

B: The Khepera micro-robot (K-Team, Lausanne), used in several of the DAC experiments, measures about 55 by 30 mm and is equipped with a colour CCD camera and active infrared sensors that allow it to detect collisions and ambient light levels.

C: One example of the application of DAC to robot random foraging. In these tasks the robot freely explores an environment that contains obstacles and targets. Performance is assessed by measuring the system's ability to maximize the number of targets found while minimizing the number of collisions suffered, combined with recall tests that assess the robot's ability to revisit areas where reward was found in the past. The robot learns to use the colour information in the environment, the patches on the floor and the walls, to acquire the shortest route between goal locations, that is, light sources. Here the trajectory of the robot during about 8 minutes of a recall test is shown for environments containing one to three targets (rows 1-3), comparing a condition where the contextual layer is disabled (first column) with one where it is enabled (second column). In both conditions the robots were allowed to learn for 30 minutes. The environment measures about 1.5 by 0.8 m.
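The adaptive layer's core idea, associating a conditioned stimulus with an unconditioned stimulus through a predictive Hebbian rule so that the CS comes to trigger an anticipatory conditioned response, can be illustrated with a minimal sketch. This is a toy delta-rule-style illustration of the principle, not the published DAC equations; the feature vector, learning rate and iteration count are hypothetical.

```python
import numpy as np

def predictive_hebbian_update(w, cs, us, eta=0.1):
    """One update of a simple predictive Hebbian rule (illustrative only).

    w   : weight vector mapping CS features to a US prediction
    cs  : conditioned-stimulus feature vector
    us  : scalar unconditioned-stimulus signal (e.g. a collision = 1.0)
    eta : learning rate (hypothetical value)

    Weights grow when the CS co-occurs with an unpredicted US, so the CS
    gradually comes to predict the US and can drive the conditioned
    response (CR) before the reflex itself is triggered.
    """
    prediction = float(w @ cs)
    w = w + eta * (us - prediction) * cs
    return w, prediction

# Toy conditioning run: a fixed CS repeatedly paired with a US.
w = np.zeros(3)
cs = np.array([1.0, 0.0, 1.0])
for _ in range(50):
    w, _ = predictive_hebbian_update(w, cs, us=1.0)

# After repeated pairing the CS alone predicts the US (prediction near 1),
# which in DAC terms would support an anticipatory conditioned response.
print(round(float(w @ cs), 2))  # → 1.0
```

In this sketch the prediction error (us - prediction) gates the Hebbian weight change, so learning stops once the CS fully predicts the US, mirroring the "predictive" aspect described above.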

From: P. F. M. J. Verschure, T. Voegtlin and R. J. Douglas (2003). Environmentally mediated synergy between perception and behaviour in mobile robots. Nature, 425, 620-624.