Welcome to the Multimodal Interactive Intelligence Laboratory (MIIL). At MIIL, we are dedicated to creating intelligent systems that can comprehend and engage in meaningful, context-aware conversations while seamlessly interacting with their surroundings. To this end, we focus on multimodal AI, which combines speech, vision, gesture, and touch to enable a natural understanding of our world. We also recognize the critical role of conversational and embodied AI in realizing Artificial General Intelligence: the ultimate goal of creating machines that possess comprehensive intelligence and interact with the world in a human-like manner.

Join us as we explore the frontiers of multimodal interactive intelligence, advance the capabilities of conversational and embodied AI, and pave the way toward a future where machines possess human-like intelligence and interact with us in a truly natural and embodied manner.

Latest News

Publication

A paper accepted to Findings of ACL 2026.

Professional Service

Paul will serve as an Area Chair for NeurIPS 2026.

Publication

Two papers accepted to CVPR 2026, one in Findings.

Professional Service

Paul will serve as an Area Chair for ECCV 2026.

Professional Service

Paul will serve as a Finance Chair for KCCV 2026.

Professional Service

Paul will serve as an Area Chair for ICML 2026.

Professional Service

Paul will serve as an Area Chair for CVPR 2026.

Professional Service

Paul will serve as an Area Chair for ICLR 2026.