Vivian Chu

PhD Candidate, Georgia Institute of Technology

Abstract
Teaching Robots About Human Environments: Leveraging Human Interaction to Efficiently Learn and Use Multimodal Object Affordances

For robots to operate in the real world, an unstructured environment with high levels of uncertainty, they need to be able to learn and adapt. Past work shows that robots can successfully learn a single skill in isolation, but to truly work in environments alongside people, robots will need a framework for reasoning and learning throughout their lifetimes. My research uses affordances as the foundation for giving robots the ability to reason about actions and effects, transfer knowledge, and communicate with people in novel environments. Specifically, the work focuses on building a library of adaptable multi-sensory affordance models of the world through interactive perception and human guidance. The key contributions of the work are (1) an algorithm for human-guided robot self-exploration, (2) a multi-sensory representation of affordances, and (3) a robotic system that utilizes multi-sensory affordance networks to execute tasks.

Bio
Vivian Chu is a Robotics Ph.D. candidate at the Georgia Institute of Technology, where she teaches robots how to perform tasks in human environments. Specifically, she develops algorithms and mathematical models that use human guidance and multisensory information to enable robots to build a library of knowledge about their environments. She is co-advised by Dr. Andrea L. Thomaz and Dr. Sonia Chernova. Vivian received her B.S. degree in Electrical Engineering and Computer Science from the University of California, Berkeley, and her M.S.E. degree in Robotics from the University of Pennsylvania. At Penn, Vivian worked in Dr. Katherine Kuchenbecker’s lab, where she focused on grounding language through object manipulation and tactile sensing. Prior to graduate school, she worked at IBM Research, Almaden, where she focused on natural language processing (NLP) and intelligent information integration. Vivian has received awards and recognition for her research, including ‘Best Paper in Cognitive Robotics’ at the IEEE International Conference on Robotics and Automation (ICRA) in 2013 and selection as a finalist for ‘Best Technical Advance in HRI’ at the International Conference on Human-Robot Interaction (HRI) in 2016. Vivian was also named one of Robohub’s “25 women in robotics you need to know about” in 2016 and was a Google Anita Borg Scholar in 2014.

Vivian Chu’s Research webpage