Together with colleagues at the Tokyo University of Agriculture and Technology (Prof. Gentiane Venture, now with the University of Tokyo) and the Tokyo Institute of Technology (Dr. Jacqueline Urakami, now with Kyocera), Dr. Marie-Luce Bourguet is working on creating tangible (robot) and virtual agents that can support learning.
Related publications and presentations:
Bourguet, M.L., Urakami, J. and Venture, G. (2022) Online Learners’ Cognitive-Affective States Awareness to Support Wellbeing and Self-Regulation Skills. HRI4Wellbeing (Human-Robot Interaction for Wellbeing) workshop at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2022), Naples and online, 2 September 2022.
Abstract— This paper offers a brief literature review of the research on linking emotions and learning; techniques for measuring learners’ cognitive-affective states; and affective feedback for shaping learners’ self-regulation skills and wellbeing. It then presents the architecture of a system that can sense a learner’s cognitive-affective state, decide if remediation is needed, and react through an intelligent virtual agent. The system uses techniques from computer vision (facial expressions and body pose estimation), eye tracking and machine learning to capture the student’s cognitive-affective state (e.g., boredom, frustration, or confusion), level of fatigue and cognitive load, physical environment (e.g., location, presence of other people, potential for disruption and distraction), and current activity. A decision system then decides if remediation is needed and what it should achieve; and a virtual agent executes the intervention. The aim of the system is to support students’ wellbeing and help them develop their self-regulated learning skills.
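The sense-decide-act pipeline described in this abstract can be sketched in a few lines of Python. The fragment below is illustrative only, not the authors' implementation: the state labels, thresholds, and rules are hypothetical stand-ins for the machine-learned sensing and the decision system the abstract describes.

from dataclasses import dataclass
from enum import Enum, auto

class AffectiveState(Enum):
    ENGAGED = auto()
    BORED = auto()
    FRUSTRATED = auto()
    CONFUSED = auto()

@dataclass
class LearnerObservation:
    state: AffectiveState
    fatigue: float         # 0.0 (alert) to 1.0 (exhausted)
    cognitive_load: float  # 0.0 (idle) to 1.0 (overloaded)
    others_present: bool   # from sensing of the physical environment

def decide_remediation(obs: LearnerObservation) -> str | None:
    """Rule-based stand-in for the decision system: map the sensed
    cognitive-affective state to an intervention goal, or None."""
    if obs.fatigue > 0.8:
        return "suggest_break"
    if obs.state is AffectiveState.CONFUSED and obs.cognitive_load > 0.7:
        return "offer_hint"
    if obs.state is AffectiveState.BORED:
        return "increase_challenge"
    return None  # no intervention needed

# The virtual agent would then execute the chosen intervention, e.g.:
obs = LearnerObservation(AffectiveState.CONFUSED, fatigue=0.3,
                         cognitive_load=0.9, others_present=False)
print(decide_remediation(obs))  # -> offer_hint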
Bourguet, M.L., Urakami, J. and Venture, G. (2022) Data-driven Behavioural and Affective Nudging of Online Learners: System Architecture and Design. AIED 2022 (Artificial Intelligence in Education), Durham, 27-31 July 2022.

Urakami, J., Bourguet, M.L. and Venture, G. (2022) Robot Public Speakers’ Effect on Audience Affective Reaction and Attention Allocation. BCS HCI 2022 conference, Keele University, 11-13 July 2022.
Abstract—Social robots delivering public speeches have a wide range of practical applications as stand-ins for educators, experts, or entertainers. The goal of our work is to investigate how a social robot should be programmed to deliver an effective public speech. Applying a mixed methods research design to collect quantitative and qualitative data, we conducted a study comparing a human speaker with a semi-anthropomorphic social robot speaker (the SoftBank Pepper robot). The robot was programmed to mimic the behaviour patterns of the human delivering the same speech. The study results show that the robot is perceived as intelligent and rational, which contributes to effective delivery of the message content. However, the robot struggles to actively engage the audience and to establish an emotional connection. In addition, behavioural patterns that appear natural in the human speaker tend to be distracting in the robot. We discuss suggestions for the improved design of robot public speakers, including implementing rhetorical skills, exploiting and synchronising the robot’s specific communication channels, and creating a robot persona.
Venture, G., Muraccioli, B., Bourguet, M.L., Urakami, J. (2022) Can robots be good public speakers? 16th ACM International Conference on Tangible Embedded and Embodied Interaction (TEI 2022), Seoul and online, 13-16 February 2022.
Abstract—Our research aims to understand whether robots can be good public speakers, what they need to reach the level of a good human public speaker, and how they may surpass one. Previous research results indicate that designing a robot speaker by mimicking some of the behaviours of a human speaker is not enough to create an effective robot speech performance; striving for human-likeness can in fact be counterproductive. In this paper, we describe how we programmed a toy-like, non-anthropomorphic small robot (the Anki Vector robot) to deliver a speech by extracting pose and facial expression information from the video of a human speaker and loosely retargeting this information to the robot. We also describe our experimental plans to compare Vector’s speech delivery performance with that of a more anthropomorphic robot (the SoftBank Pepper robot), which has been programmed to closely mimic the human speaker’s behaviour. The two robots are compared in terms of their ability to evoke the positive affective responses necessary to spark interest, motivate the audience to listen, and engage the audience in meaningful ways.
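To illustrate the extraction-and-loose-retargeting idea, the sketch below extracts per-frame body landmarks from a speaker video and derives a single coarse head-tilt signal that a toy robot could replay. It assumes MediaPipe Pose and OpenCV (the paper does not name the authors' tooling); the proxy signal and its scaling are illustrative, and the final mapping onto the Vector robot's joints is deliberately omitted, since the paper stresses loose rather than joint-by-joint retargeting.

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def head_tilt_signal(video_path: str) -> list[float]:
    """Per-frame signal in [-1, 1]: nose height relative to the
    shoulder midline, a loose proxy for the speaker's head pitch."""
    signal = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.pose_landmarks:
                continue  # no person detected in this frame
            lm = results.pose_landmarks.landmark
            nose = lm[mp_pose.PoseLandmark.NOSE]
            mid_y = (lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y +
                     lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y) / 2.0
            # Image y grows downward, so invert to make "head up" positive.
            signal.append(max(-1.0, min(1.0, (mid_y - nose.y) * 4.0)))
    cap.release()
    return signal

A per-frame signal like this could then be smoothed and mapped onto one of Vector's expressive channels (head angle, lift, or eye animation) rather than copied joint by joint.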
Bourguet, M.L., Jin, Y., Shi, Y., Chen, Y., Rincon, L., Venture, G. (2020) Social Robots that can Sense and Improve Student Engagement. IEEE Int. Conf. on Engineering, Technology and Education (TALE 2020), online, 8-11 December 2020.

Bourguet, M.L., Xu, M., Zhang, S., Urakami, J., Venture, G. (2020) The Impact of a Social Robot Public Speaker on Audience Attention. 8th Int. Conf. on Human Agent Interaction, online, 10-13 November 2020.
Shi, Y., Chen, Y., Rincon Ardila, L., Venture, G. and Bourguet, M.L. (2019) A Visual Sensing Platform for Robot Teachers. 7th International Conference on Human Agent Interaction, Kyoto, Japan, 6-10 October 2019.

Bourguet, M.L. and Venture, G. (2019) The Challenges of Working on Educational Social Robots. Workshop on the challenges of working with social robots that collaborate with people, at the ACM CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4 May 2019.