End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions

Self-archived version of the published article
Date: 2018
DOI: 10.7559/citarj.v10i2.424
Citation: El Haddad, Kevin; Rizk, Yara; Heron, Louise; Hajj, Nadine; Zhao, Yong; Kim, Jaebok; Ngo, Trong Trung; Lee, Minha; Doumit, Marwan; Lin, Payton; Kim, Yelin; Cakmak, Huseyin (2018). End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions. Journal of Science and Technology of the Arts, 10(2), 49-61. 10.7559/citarj.v10i2.424.
Abstract
In this work, we established the foundations of a framework with the goal of building an end-to-end naturalistic expressive listening agent. The project was split into modules for recognizing the user's paralinguistic and nonverbal expressions, predicting the agent's reactions, synthesizing the agent's expressions, and recording nonverbal conversational expressions. First, a multimodal, multitask deep-learning-based emotion classification system was built, along with a rule-based visual expression detection system. Then, several sequence prediction systems for nonverbal expressions were implemented and compared. An audiovisual concatenation-based synthesis system was also implemented. Finally, a naturalistic, dyadic emotional conversation database was collected. We report here the work done for each of these modules and our planned future improvements.
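To illustrate the multimodal, multitask classification idea mentioned in the abstract, below is a minimal sketch (not the authors' architecture): audio and visual feature vectors are projected, fused by concatenation into a shared encoder, and two task heads produce predictions (here, an assumed emotion task and an assumed nonverbal-expression task). All layer sizes, feature dimensions, and label counts are illustrative assumptions.

    # Minimal sketch of a multimodal, multitask classifier (PyTorch).
    # Dimensions, task names, and label counts are assumptions, not the
    # paper's actual configuration.
    import torch
    import torch.nn as nn

    class MultimodalMultitaskClassifier(nn.Module):
        def __init__(self, audio_dim=40, visual_dim=20, hidden_dim=64,
                     n_emotions=4, n_expressions=3):
            super().__init__()
            # Modality-specific projections
            self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
            self.visual_net = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
            # Shared encoder over the fused (concatenated) representation
            self.shared = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
            # One output head per task (multitask learning)
            self.emotion_head = nn.Linear(hidden_dim, n_emotions)
            self.expression_head = nn.Linear(hidden_dim, n_expressions)

        def forward(self, audio_feats, visual_feats):
            fused = torch.cat([self.audio_net(audio_feats),
                               self.visual_net(visual_feats)], dim=-1)
            shared = self.shared(fused)
            return self.emotion_head(shared), self.expression_head(shared)

    if __name__ == "__main__":
        model = MultimodalMultitaskClassifier()
        audio = torch.randn(8, 40)    # batch of per-utterance audio features
        visual = torch.randn(8, 20)   # batch of per-utterance visual features
        emo_logits, expr_logits = model(audio, visual)
        # Multitask training typically sums the per-task losses
        loss = (nn.functional.cross_entropy(emo_logits, torch.randint(0, 4, (8,)))
                + nn.functional.cross_entropy(expr_logits, torch.randint(0, 3, (8,))))
        print(emo_logits.shape, expr_logits.shape, loss.item())

Sharing the encoder while keeping separate heads is what lets the two recognition tasks regularize each other; the actual systems described in the paper may differ in features, fusion strategy, and tasks.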