How might robot eye-gaze mirroring affect human comfort levels?

Carnegie Mellon University: Human Robot Interaction
timeline

10 weeks

role

Researcher

team

Gerry D'Ascoli
Michelle Lu
Sophia Timko

tools

Google Forms
Python
VoiceFlow

Tasked with conducting a novel study, our team of four graduate students aimed to investigate the effect of eye gaze mirroring on comfort and engagement in human-robot social interaction. Eye gaze is a particularly powerful nonverbal cue in communication. While existing research has examined the effects of mimicry, eye gaze, and nonverbal communication in human-robot interactions, our research brings an aspect of novelty by specifically focusing on mirrored eye gaze while holding other factors constant.

We hypothesized that human comfort level would increase while interacting with a robot if the robot projects the human’s eye gaze patterns back at them throughout the conversation. To test this hypothesis, we conducted a within-subjects experiment using gaze pattern as the independent variable and comfort level as the dependent variable. Participants were asked to partake in a mock job interview with our robot, “Talent,” where the conversation was divided into 2 segments: an experimental condition and a control condition.


eye-gaze and human-robot interaction

Eye gaze, a primary channel of nonverbal communication, can convey attentiveness, signal engagement, and improve understanding in human-to-human interaction. As we continue to explore how robots can interact more naturally with humans, understanding the impact of social interaction skills such as eye gaze is critical. In the early stages of our project, we took a deep dive into existing research on robot mimicry, eye gaze, eye-tracking technology, nonverbal communication, and robot trust. These resources gave us a foundation from which we could design our research study.

For example, we designed our robot's appearance with round eye shapes and large irises after learning that these characteristics are perceived as the "friendliest" (Onuki et al., 2013). Existing research also showed that imitation (Shimada et al., 2008) and direct eye gaze (Babel et al., 2021) lead to more favorable impressions in human-robot interactions. We took all of these findings into consideration when developing our study.

a within-subjects study design

hypothesis
Human comfort levels will increase when interacting with a robot that mirrors their eye gaze.
Variables
Independent: Gaze pattern
Dependent: Comfort level
Objective Metrics (a computation sketch follows these lists)
  • Length of eye contact
  • Length of gaze aversion
  • Polar coordinates of gaze aversion
  • Frequency of gaze aversion
  • Frequency of blinking
Subjective Metrics
  • Likert-scale sentiment ratings
    (comfort, engagement, difficulty, etc.)
  • Number of positive/negative comments in comment analysis
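To make the objective metrics concrete, here is a minimal sketch of how they could be derived from a timestamped gaze log. The eye-contact region, radius, and log format are illustrative assumptions, not the parameters our scripts actually used.

```python
# Hedged sketch: deriving objective gaze metrics from a log of (t, x, y)
# samples in normalized screen coordinates. EYE_CENTER and EYE_RADIUS are
# illustrative assumptions, not the study's actual parameters.
import math

EYE_CENTER = (0.5, 0.4)   # assumed screen position of the robot's eyes
EYE_RADIUS = 0.15         # assumed radius counted as "eye contact"

def in_eye_contact(x, y):
    """True if a gaze sample falls inside the assumed eye-contact region."""
    return math.hypot(x - EYE_CENTER[0], y - EYE_CENTER[1]) <= EYE_RADIUS

def aversion_polar(x, y):
    """Polar coordinates (r, theta) of an averted gaze sample, relative
    to the robot's eyes."""
    dx, dy = x - EYE_CENTER[0], y - EYE_CENTER[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def gaze_metrics(samples):
    """Total eye-contact time, total aversion time, and aversion count
    from consecutive (t, x, y) samples."""
    contact_time = aversion_time = 0.0
    aversions = 0
    was_contact = True
    for (t0, x0, y0), (t1, _, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        contact = in_eye_contact(x0, y0)
        if contact:
            contact_time += dt
        else:
            aversion_time += dt
            if was_contact:
                aversions += 1  # a new aversion event begins
        was_contact = contact
    return contact_time, aversion_time, aversions
```

Blink frequency is not derivable from gaze positions alone; in practice it can be inferred from drops in the eye tracker's per-sample confidence.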

Experiment protocol

Many prior studies have examined eye gaze in conjunction with other nonverbal cues, such as neck movements, head tracking, and posture, and most use trust as the dependent variable. Our study instead isolates mirrored eye gaze while holding those other factors constant, and targets comfort level as the subjective metric.
  • Participants complete a pre-study questionnaire with demographic information and give informed consent.
  • Participants calibrate the eye-tracking glasses by introducing themselves to Talent, the robot.
  • Talent begins part 1 of the interview, with participants randomly assigned to start in either the control condition (no eye gaze mirroring) or the experimental condition (eye gaze mirroring); a sketch of this counterbalancing follows the list.
  • Participants complete the post-study questionnaire for part 1.
  • Talent begins part 2 of the interview with the alternate condition.
  • Participants complete the post-study questionnaire for part 2.
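As a minimal illustration of the counterbalanced assignment in step 3 (not our actual study tooling), each participant's condition order can be randomized like so:

```python
# Minimal sketch of counterbalanced condition assignment: each participant
# is randomly assigned to start with either condition and sees the other
# one second. Illustrative only.
import random

def assign_order():
    order = ["control (no mirroring)", "experimental (mirroring)"]
    random.shuffle(order)
    return order

for participant in range(1, 11):  # the study ran 10 participants
    print(f"P{participant}: {' -> '.join(assign_order())}")
```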

Task Script

The task script, my primary responsibility, was designed to facilitate a conversation between “Talent” and a “job seeker,” i.e., the participant. Using VoiceFlow, a conversational assistant design platform, we developed several conversational paths that were executed during the study as the voice of “Talent” using a Wizard of Oz technique. The task script was broken into three segments, all consisting of common interview questions.
Introduction to Talent

The first segment served as a general introduction and allowed for a baseline capture of eye gaze data from the participant to calibrate the eye-tracking glasses.

Control/Experimental

Segments two and three, designed to follow the natural cadence of an interview, prompted participants to answer questions. These segments served as the basis for the control and experimental conditions.

Natural utterances

Throughout the interview, “Talent” would respond to participants' answers with utterances such as “That's a great school!” and “Thank you for sharing” to more closely mimic the natural exchanges of conversation.

Conversation paths

Using VoiceFlow, we built multiple conversation paths in advance and selected the path unique to each participant's desired job title, while remaining consistent in the questions asked.
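VoiceFlow is a visual builder, so there is no code to show from it directly; the sketch below is just a Python analogue of the branching structure: shared interview questions plus one role-specific path selected per participant. The job titles and question wording here are placeholders, not our actual script.

```python
# Python analogue of the VoiceFlow branching: common interview questions
# plus a path keyed by the participant's desired job title. All titles
# and questions are illustrative placeholders.
COMMON_QUESTIONS = [
    "Tell me about yourself.",
    "Why are you interested in this role?",
]

ROLE_PATHS = {
    "ux designer": ["Walk me through a product you designed end to end."],
    "roboticist": ["Tell me about a robotics system you have built."],
}

def build_script(job_title):
    """Return the full question list for a participant's desired role."""
    return COMMON_QUESTIONS + ROLE_PATHS.get(job_title.lower(), [])
```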

Robotics system development

Built by Gerry D'Ascoli and Michelle Lu, the experiment software is broken into three separate Python scripts. The first interfaces with the Pupil Labs Pupil Core eye-tracking glasses; the second renders the robot animation using Pygame (after pyglet proved unable to meet our automated-update requirement); and the third takes the gaze models built up during each trial and writes them to a CSV file for analysis.
Pupil Core Interface
The Pupil Core interface connects to Pupil Capture, subscribes to and reads the gaze messages published by the glasses, and builds the model of the user's gaze pattern from which we pull our objective metrics.
  • Connected to Pupil Capture
  • Read pupil gaze messages
  • Build gaze model for objective metrics
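For context, a minimal gaze subscriber using the Pupil Labs Network API (ZeroMQ + msgpack) looks roughly like this. It assumes Pupil Capture is running locally on its default Pupil Remote port; this is a sketch, not our exact script.

```python
# Minimal Pupil Core gaze subscriber via the Pupil Network API. Assumes
# Pupil Capture is running locally with the default Pupil Remote port.
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote which port publishes data.
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to gaze messages only.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.unpackb(payload)
    # norm_pos: gaze point in normalized scene-camera coordinates;
    # confidence: per-sample quality estimate (drops during blinks).
    x, y = gaze["norm_pos"]
    print(topic.decode(), round(x, 3), round(y, 3), gaze["confidence"])
```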
Robot Animation
The robot animation script was built in Pygame. It is essentially two images overlaid: one of the robot's face and one of the robot's pupils.
  • Built in Pygame
  • Two images overlaid
  • Eyes move based on input polar coordinates (r, θ)
After exploring different robot options in the AI Maker Space at Carnegie Mellon University, we ultimately decided to display our robot using computer graphics rather than a physical robot, due to our limited access to robots whose eyes could be directly programmed.
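A minimal sketch of that two-layer approach in Pygame follows; the image file names and the (r, θ) value are illustrative assumptions, not our actual assets.

```python
# Minimal sketch of the two-image overlay in Pygame. The file names and
# the (r, theta) value below are illustrative assumptions.
import math
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

face = pygame.image.load("robot_face.png").convert_alpha()
pupils = pygame.image.load("robot_pupils.png").convert_alpha()
CENTER = (400, 300)  # pupils' rest position (direct gaze)

def pupil_offset(r, theta):
    """Convert a polar gaze target (r, theta) into a pixel offset."""
    return r * math.cos(theta), -r * math.sin(theta)  # screen y points down

r, theta = 20, math.pi / 4  # example target supplied by the gaze model
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    dx, dy = pupil_offset(r, theta)
    screen.fill((0, 0, 0))
    screen.blit(face, face.get_rect(center=CENTER))
    screen.blit(pupils, pupils.get_rect(center=(CENTER[0] + dx, CENTER[1] + dy)))
    pygame.display.flip()
    clock.tick(30)

pygame.quit()
```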

Survey design

Designed by Sophia Timko, the post-trial survey asked participants to rate statements on a 5-point Likert scale after each condition. Many of the comfort questions were derived from the Robotic Social Attributes Scale (RoSAS). To manage bias, half of the questions were framed positively and half negatively. We also asked 3 open-ended questions to let participants elaborate and raise factors the survey didn't consider.

Study results and findings

10 Participants
  • Ages 22-30
  • 6 Male, 4 Female (self-identified)
  • MHCI, MBA, and MRSD programs
  • Interested in roles in UX design, UX research, robotics, and product management
Subjective Metrics
Paired t-tests were performed for all 8 questions between the 2 conditions.
  • None of the differences between the experimental and control conditions were statistically significant at the p < .05 level
Sign tests were performed for all 8 questions.
  • Participants rated the experimental condition as more sociable (p = .031) and more engaged (p = .035) than the control condition
  • Participants perceived the control condition as listening better than the experimental condition (p = .035)
Objective Metrics
Objective metrics were pulled from the Pupil Labs eye-tracking glasses.
Paired t-tests were performed for all 4 metrics between the 2 conditions; an illustrative SciPy sketch follows this list.
  • None of the differences between the experimental and control conditions were statistically significant at the p < .05 level
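As an illustration of the two tests used above, the analysis for one Likert item could look like this in SciPy. The ratings below are placeholders, not our study data.

```python
# Illustrative paired t-test and sign test for one Likert item across the
# two conditions. The ratings below are placeholders, not study data.
import numpy as np
from scipy import stats

control      = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3])
experimental = np.array([4, 4, 3, 4, 5, 3, 3, 4, 4, 3])

# Paired t-test on per-participant differences.
t_stat, t_p = stats.ttest_rel(experimental, control)

# Sign test: binomial test on the direction of the non-zero differences.
diffs = experimental - control
n_positive = int(np.sum(diffs > 0))
n_nonzero = int(np.sum(diffs != 0))
sign_p = stats.binomtest(n_positive, n_nonzero, p=0.5).pvalue

print(f"paired t-test p = {t_p:.3f}, sign test p = {sign_p:.3f}")
```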
Eye Gaze Mirroring Shows Promise
  • A majority of the positive statements (comfortable, engaged, sociable) saw a rise in agreement for the experimental condition, and a majority of the negative statements (awkward, dismissive, strange) saw a decrease in agreement.
  • When asked to describe their interactions with the robot, participants used more positive words and fewer negative words after the experimental session than after the control session, suggesting an overall more positive impression of the robot that used eye gaze mirroring.
  • Several participants also commented that the experimental condition felt more natural than the control condition.

Study limitations

Novelty effect
“The glasses are so cool!” While the glasses were essential to tracking eye gaze, we believe they led to unnatural postures and mannerisms.
Environment
Having all of the group members in the room may have led to feelings of pressure and discomfort.
Technical difficulties
We had some technical difficulties with the eye-tracking glasses that may have influenced participants' overall experience.
Simplified software
We had to discretize aspects of the gaze instead of learning a generalized model of each user's gaze.

Considerations for future work

Experiment autonomy
A more autonomous experiment without the need for Wizard of Oz components
Improve gaze recognition
More robust gaze recognition to allow for more natural user data
Physical robot
Using a physical robot rather than computer graphics