AI Lunch Seminar

Tuesday, January 26, 2016 - 12:00pm to 1:00pm


3305 Newell-Simon Hall


Gus (Guangyu) Xia, Ph.D. Student

As both a computer scientist and a musician, I design intelligent systems to understand and extend human musical expression. To understand means to model the musical expression conveyed through acoustic, gestural, and emotional signals. To extend means to use this understanding to create expressive, interactive, and autonomous agents that serve both amateur and professional musicians. In particular, I create interactive artificial performers that can perform expressively in concert with humans by learning musicianship from rehearsal experience. This work unifies machine learning with knowledge representation of music structure and performance skills in an HCI framework. In this talk, I will go over the learning techniques and present robot musicians capable of playing collaboratively and reacting to musical nuance with facial and body gestures.