Starting 2024 off on a positive note! Our paper, "When Text and Speech are Not Enough: A Multimodal Dataset of Collaboration in a Situated Task," has been accepted for publication in the Journal of Open Humanities Data.
The paper explores the nuances of human interaction, highlighting how much is lost when we rely on speech or text alone. It introduces a multimodal dataset that captures channels such as gesture, gaze, joint attention, and user interaction. Together, these channels offer a far more complete view of how people collaborate on situated tasks.
You can check out the paper here.