AMICORC project

AMICORC will analyze recent research findings in human cognition, cognitive robotics and human-robot interaction, and use them as the basis for developing new robot reasoning and interaction strategies. The computational architecture developed in AMICORC can be seen as a context-to-data interpreter that enables machines to “reason” from constantly changing perspectives. AMICORC will deliver the Theory of Constructed Robot Cognition (TCRC), a new theory of information and a generic framework for integrating human, robot and environmental perspectives on robot embodiment, interaction and adaptation. These perspectives will change constantly through interaction within a shared environment, driven by newly acquired, insufficient or partial information. AMICORC will thus pursue a paradigm shift, moving away from raw sensed data toward contextual anticipation.

For the purposes of this study, four distinct sources of social signals will be analyzed in multimodal interaction: face emotion recognition, the level of loudness in the room, the intensity of body movements, and sentiment analysis applied to speech. The system will interpret these social signals to generate hypotheses and output non-verbal feedback to the person in interaction using information visualization techniques (see the sketch below).

As a proof of concept, the overall methodology will be implemented and tested in a couple of test scenarios on a real, augmented or virtual social robot. During the experiments, the teacher will be able to adapt the presentation style and achieve better rapport with the student. Usability evaluation will be based on the Wizard of Oz approach, allowing a teacher to interact with students through a robotic interface. Built-in functionalities of the robot will provide a degree of situational embodiment, self-explainability and context-driven interaction. The planned research will show in what way, and to what extent, a cognitive robot can be truly effective in technology-enhanced learning.
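As a rough illustration of this signal flow, the minimal Python sketch below fuses the four signal sources into a single engagement hypothesis that could drive the visual feedback. It assumes each source has already been reduced to a normalized score in [0, 1]; the class, weights, thresholds and labels are illustrative placeholders, not part of the AMICORC architecture.

```python
from dataclasses import dataclass

# Hypothetical, normalized readings for the four social-signal sources
# named above. All values are assumed to lie in [0, 1].
@dataclass
class SocialSignals:
    face_emotion_valence: float   # from a face emotion recognizer
    room_loudness: float          # level of loudness in the room
    movement_intensity: float     # intensity of body movements
    speech_sentiment: float       # sentiment analysis applied to speech

def fuse_signals(s: SocialSignals,
                 weights=(0.4, 0.1, 0.2, 0.3)) -> float:
    """Late fusion: weighted average of the four modality scores.

    The weights are illustrative placeholders, not values from AMICORC.
    """
    scores = (s.face_emotion_valence,
              1.0 - s.room_loudness,   # a noisy room is read as lower rapport
              s.movement_intensity,
              s.speech_sentiment)
    return sum(w * x for w, x in zip(weights, scores))

def hypothesis(engagement: float) -> str:
    """Map the fused score to a coarse hypothesis the robot could visualize."""
    if engagement > 0.66:
        return "engaged"
    if engagement > 0.33:
        return "neutral"
    return "disengaged"

if __name__ == "__main__":
    reading = SocialSignals(0.8, 0.3, 0.5, 0.7)
    e = fuse_signals(reading)
    print(f"engagement={e:.2f} -> {hypothesis(e)}")
```

In a full system, the fused hypothesis would be rendered back to the person through the robot's information visualization channel rather than printed to a console.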


About PLEA

PLEA Core is the system backbone for real-time online emotion recognition. It uses visual and audio modalities to reason about the possible emotional state of the person in interaction through a multimodal information fusion algorithm. Both modalities analyze the input data using deep learning. PLEA Core is the computational basis of PLEA's affective robotic empathy. Based on the acquired information, the system can autonomously generate facial expressions on the robot in real time. In this way the robot can respond with its own facial expressions and provide non-verbal emotional feedback, as sketched below. More on: https://www.art-ai.io/programme/plea.
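To make the fusion step concrete, the following is a minimal Python sketch of one common approach, confidence-weighted late fusion over per-modality emotion distributions. The emotion set, weights and function names are assumptions made for illustration, not PLEA Core's actual algorithm.

```python
# Hypothetical emotion categories; PLEA's actual label set is not given here.
EMOTIONS = ("happy", "sad", "angry", "surprised", "neutral")

def fuse(visual: dict, audio: dict,
         w_visual: float, w_audio: float) -> dict:
    """Blend two emotion probability distributions by modality confidence."""
    total = w_visual + w_audio
    fused = {e: (w_visual * visual.get(e, 0.0) +
                 w_audio * audio.get(e, 0.0)) / total
             for e in EMOTIONS}
    norm = sum(fused.values()) or 1.0   # renormalize to a valid distribution
    return {e: p / norm for e, p in fused.items()}

def pick_expression(dist: dict) -> str:
    """Choose the facial expression the robot renders in response."""
    return max(dist, key=dist.get)

if __name__ == "__main__":
    visual = {"happy": 0.6, "neutral": 0.3, "sad": 0.1}   # vision model output
    audio = {"happy": 0.4, "neutral": 0.4, "angry": 0.2}  # audio model output
    dist = fuse(visual, audio, w_visual=0.7, w_audio=0.3)
    print(pick_expression(dist), dist)
```

In a deployed system, the two input distributions would come from the deep learning models for each modality, and the selected expression would drive the robot's face rendering loop.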