Workshop 2013 challenge

The 2013 Multi-modal Challenge workshop will be held on Dec. 9th in the Level 3 Seminar Room of the NICTA Australian Technology Park Laboratory, Redfern, in conjunction with ICMI 2013. The workshop will be devoted to the presentation of the most recent and challenging techniques in multi-modal gesture recognition. The program is detailed next:

9:00h Opening: Presentation of the workshop

9:15h Invited speaker I: Leonid Sigal, Disney Research

Title: Action Recognition and Understanding: Latest Challenges and Opportunities

10:00h Challenge results presentation

Multi-modal Gesture Recognition Challenge 2013: Dataset and Results

Sergio Escalera, Jordi Gonzàlez, Xavier Baró, Miguel Reyes, Oscar Lopes, Isabelle Guyon, Vassilis Athitsos, Hugo J. Escalante

10:30h Coffee break I

11:00h Invited speaker II: Cristian Sminchisescu, Lund University

Title: Human Actions and 3D Pose in the Eye: From Perceptual Evidence to Accurate Computational Models

11:45h Presentations I: Multi-modal Gesture Recognition Challenge I and award ceremony

Fusing Multi-modal Features for Gesture Recognition (1st Prize)

Jiaxiang Wu, Jian Cheng, Chaoyang Zhao and Hanqing Lu

A Multi Modal Approach to Gesture Recognition from Audio and Video Data (3rd Prize)

Immanuel Bayer and Thierry Silbermann

12:30h Lunch break (included and sponsored by National ICT Australia)

13:30h Invited speaker III: Antonis Argyros, Univ. of Crete, Institute of Computer Science

Title: Tracking the articulated motion of human hands

14:15h Presentations II: Multi-modal Gesture Recognition Challenge II

Online RGB-D Gesture Recognition with Extreme Learning Machines

Xi Chen and Markus Koskela

A Multi-modal Gesture Recognition System Using Audio, Video, and Skeletal Joint Data

Karthik Nandakumar, Kong-Wah Wan, Jian-Gang Wang, Wen Zheng Terence Ng, Siu Man Alice Chan and Wei-Yun Yau

15:15h Invited speaker IV: Richard Bowden, University of Surrey

Title: Recognising spatio-temporal events in video

16:00h Coffee break II

16:30h Presentations III: Challenge for Multimodal Mid-Air Gesture Recognition for Close HCI

ChAirGest - A Challenge for Multimodal Mid-Air Gesture Recognition for Close HCI

Simon Ruffieux, Denis Lalanne and Elena Mugellini

ChAirGest - Gesture Spotting and Recognition Using Saliency Detection and Concatenated HMMs

Ying Yin and Randall Davis

17:30h Presentations IV: Multimodal Gesture Recognition Applications

Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings

Víctor Ponce-López, Sergio Escalera, and Xavier Baró

Multi-modal Descriptors for Multi-class Hand Pose Recognition in Human Computer Interaction Systems

Jordi Abella, Raúl Alcaide, Anna Sabaté, Joan Mas, Sergio Escalera, Jordi Gonzàlez and Coen Antens

18:15h Closing: Conclusions of the workshop