Perceptive Animated Interfaces:

The Next Generation of Interactive Learning Tools

 

Ron Cole

(Center for Spoken Language Research, University of Colorado, Boulder)

We envision a new generation of human-computer interfaces that engage users in natural, face-to-face conversational interaction with intelligent animated characters across a variety of learning domains, including learning to read and learning new languages. These animated characters, or virtual tutors, will interact with learners much as sensitive and effective teachers do, through speech, facial expressions, and hand and body gestures. The virtual tutor will use machine perception technologies, including face tracking, gaze tracking, emotion detection, and speech understanding, to interpret the user's intentions and cognitive state, and will respond either by presenting media objects in the learning environment (e.g., videos, images, text) or by conversing with the learner using speech accompanied by appropriate facial expressions and gestures.
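The perceive-interpret-respond cycle described above can be sketched in a few lines of code. This is an illustrative sketch only, with hypothetical names and rules; the actual tutor's perception fusion and response selection are far richer than the toy logic shown here.

```python
# Sketch (hypothetical) of the tutor's loop: perception events from face
# tracking, emotion detection, and speech understanding are fused into an
# estimate of the learner's state, which drives a multimodal response.
from dataclasses import dataclass

@dataclass
class LearnerState:
    attending: bool = True      # inferred from face/gaze tracking
    frustrated: bool = False    # inferred from emotion detection
    utterance: str = ""         # output of speech understanding

def choose_response(state: LearnerState) -> dict:
    """Map the inferred learner state to a multimodal tutor action."""
    if not state.attending:
        # Learner has looked away: redirect attention with a gesture.
        return {"speech": "Let's look at the next word together.",
                "gesture": "point_to_text"}
    if state.frustrated:
        # Learner is struggling: encourage and replay supporting media.
        return {"speech": "Good try! Listen to the sounds again.",
                "media": "play_phoneme_audio", "expression": "encouraging"}
    # Default: positive reinforcement.
    return {"speech": "Great reading! Keep going.", "expression": "smile"}
```

The point of the sketch is the shape of the interaction, not the rules themselves: perception servers write into a shared learner-state estimate, and the response is a bundle of coordinated speech, expression, gesture, and media actions.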

 

The research uses CU Communicator, an environment for researching and developing spoken dialogue systems that support natural, unconstrained, mixed-initiative spoken dialogues in specific task domains. Spoken dialogue interaction in Communicator takes place through a set of technology servers that communicate with one another: an audio server, speech recognition, semantic parsing, language generation, speech synthesis, and dialogue management. By adding computer vision and computer animation servers to the Communicator architecture, we have transformed Communicator into a platform for research and development of perceptive animated interfaces.
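In spirit, the architecture is a hub routing messages among independent technology servers. The following is a minimal sketch under that assumption; the server functions, message fields, and toy transformations are illustrative stand-ins, not the actual CU Communicator API or protocol.

```python
# Hub-and-servers sketch (illustrative names, not the CU Communicator API):
# each "server" transforms a message dict, and the hub routes the message
# through the servers in sequence.

def speech_recognizer(msg):
    # Stand-in for ASR: turn "audio" into a word string list.
    msg["words"] = msg.pop("audio").split()
    return msg

def semantic_parser(msg):
    # Stand-in for semantic parsing: build a concept frame from the words.
    msg["frame"] = {"intent": "read_word", "args": msg["words"]}
    return msg

def dialogue_manager(msg):
    # Stand-in for dialogue management: decide what to say back.
    msg["reply_concept"] = "confirm:" + " ".join(msg["frame"]["args"])
    return msg

def language_generator(msg):
    # Stand-in for language generation: render the concept as text.
    msg["reply_text"] = "You said: " + msg["reply_concept"].split(":", 1)[1]
    return msg

def hub(msg, servers):
    """Route a message through each server in turn."""
    for server in servers:
        msg = server(msg)
    return msg

pipeline = [speech_recognizer, semantic_parser,
            dialogue_manager, language_generator]
result = hub({"audio": "the cat sat"}, pipeline)
# result["reply_text"] == "You said: the cat sat"
```

Because each server only reads and writes fields of the shared message, new servers (here, computer vision and animation) can be added to the pipeline without modifying the existing ones, which is what makes the architecture extensible to perceptive animated interfaces.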

 

Our research on perceptive animated interfaces occurs in the context of the Colorado Literacy Tutor, a large, multi-laboratory project that aims to improve student achievement in the state of Colorado by developing a comprehensive, computer-based literacy program. The program identifies students who have difficulty learning to read and provides them with an individualized sequence of tutoring exercises in which a virtual tutor interacts with the student to teach and exercise foundational reading skills (e.g., phonological awareness, letter-to-sound decoding), reading aloud, and comprehension.

 

My presentation will motivate the vision of perceptive animated interfaces in greater detail, describe the technical and practical challenges involved in developing, deploying and evaluating learning tools incorporating perceptive animated agents that behave like effective teachers, and provide demonstrations of the learning tools and their component technologies.