Welcome to the Multimodal Interactive Intelligence Laboratory (MIIL). At MIIL, we are dedicated to creating intelligent systems that can comprehend and engage in meaningful, context-aware conversations while seamlessly interacting with their surroundings. To this end, we recognize the significance of multimodal AI, which combines speech, vision, gesture, and touch to enable natural understanding of our world. Moreover, we understand the critical role of conversational and embodied AI in realizing Artificial General Intelligence: the ultimate goal of creating machines that possess comprehensive intelligence and interact with the world in a human-like manner.

Join us in our pursuit of Artificial General Intelligence as we explore the frontiers of multimodal interactive intelligence, advance the capabilities of conversational and embodied AI, and pave the way toward a future where machines possess human-like intelligence and interact with us in a truly natural and embodied manner.

Latest News

Publication

Two papers accepted to NeurIPS 2024.

Professional Service

Paul will serve as an Area Chair for CVPR 2025.

Professional Service

Paul will serve as an Area Chair for ICLR 2025.

Professional Service

Paul will serve as a Senior Program Committee member for AAAI 2025.

Publication

A paper accepted to ECCV 2024.

Professional Service

Paul will serve as an Area Chair for NeurIPS 2024.

Professional Service

Paul will serve as the Finance Chair for KCCV 2024.

Publication

Two papers accepted to CVPR 2024.