A cognitive approach to goal-level imitation
Imitation in robotics is seen as a powerful means to reduce the complexity of robot programming. It allows users to instruct robots by
simply showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it just as human newborns do. Despite the many facets of imitative behaviour observed in humans and higher primates, imitation in robotics has usually been
implemented as a process of copying demonstrated actions onto the movement apparatus of the robot. While the results achieved so far are impressive, we believe that a shift towards a higher form of imitation, namely the comprehension of human actions and the inference of their underlying intentions, is needed. To be useful as human companions, robots must act purposefully, achieving goals and fulfilling human expectations. In this paper we present ConSCIS (Conceptual Space based Cognitive Imitation System), an architecture for goal-level imitation in robotics in which the focus is on the final effects of actions on objects. The architecture tightly links low-level data with high-level
knowledge, and integrates, in a unified framework, several aspects of imitation, such as perception, learning, knowledge representation,
action generation and robot control. Some preliminary experimental results obtained with an anthropomorphic arm/hand robotic system are presented.