2013 Multi-modal Challenge

MULTI-MODAL GESTURE RECOGNITION

Workshop held on Monday, Dec 9th in the Level 3 Seminar Room of the NICTA Australian Technology Park Laboratory, Redfern, in conjunction with ICMI 2013

[DOWNLOAD THE PAPER]


NEWS: You can check a detailed description of the data and results here. If you apply your methodology to this dataset and publish your results, the proper reference for the dataset is: S. Escalera, J. Gonzàlez, X. Baró, M. Reyes, O. Lopes, I. Guyon, V. Athitsos, H.J. Escalante, "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", ICMI 2013.

NEWS: You can check the Challenge results here and the evaluation procedure here. The organizers would like to thank the 54 participating teams and especially congratulate the winners. We hope you enjoyed the challenge!

In 2013, ChaLearn is organizing a challenge and workshop on multi-modal gesture recognition from 2D and 3D video data captured with Kinect, in conjunction with ICMI 2013, December 9-13, Sydney, Australia. Call for participation (pdf)

NEWS

December 17: You can check the pictures of the MMGR workshop at ICMI here.

October 28: You can check the final program for the MMGR workshop at ICMI here.

October 24: You can check a detailed description of the ChaLearn Multi-Modal Gesture Recognition Challenge 2013 dataset and results here.

September 5: The results of the Challenge can be found here and the evaluation procedure here.

August 15: End of the quantitative competition.

August 2: Participants of the 2013 ICMI ChaLearn Multi-modal Gesture Challenge are strongly encouraged to submit a paper to a special issue of JMLR.

August 1: We have released the labeled validation data and the encrypted test data. You can find them on the download page.

June 30: Release of MATLAB sample code for generating a Kaggle submission.

June 21: The submission page at Kaggle is open!

June 4: Release of validation data.

June 3: The conference submission website is open.

May 27: Release of development data.

April 30: Initial sample data for the 2013 Multi-modal Gesture Recognition Challenge released.

April 1, 2013: Initial sample data for the 2013 Multi-modal Gesture Recognition Challenge will be released on April 30.

Kinect is revolutionizing the field of gesture recognition given the set of input data modalities it provides, including RGB images, depth images (from an infrared sensor), and audio. Gesture recognition is genuinely important in many multi-modal interaction and computer vision applications, including image/video indexing, video surveillance, computer interfaces, and gaming, and it provides excellent benchmarks for algorithms. The recognition of continuous, natural signing is very challenging due to the multi-modal nature of the visual cues (e.g., movements of fingers and lips, facial expressions, body pose), as well as technical limitations such as spatial and temporal resolution and unreliable depth cues.

The Multi-modal Challenge workshop will be devoted to the presentation of the most recent and challenging techniques in multi-modal gesture recognition. The committee encourages paper submissions on the following topics, among others:

- Multi-modal descriptors for gesture recognition

- Fusion strategies for gesture recognition

- Multi-modal learning for gesture recognition

- Data sets and evaluation protocols for multi-modal gesture recognition

- Applications of multi-modal gesture recognition

The results of the challenge will be discussed at the workshop. The challenge features a quantitative evaluation of automatic gesture recognition on a multi-modal dataset recorded with Kinect (providing RGB images of face and body, depth images of face and body, skeleton information, joint orientations, and audio), comprising 13,858 Italian gestures performed by several users. The emphasis of this edition of the competition is on multi-modal automatic learning of a vocabulary of 20 types of Italian gestures performed by several different users while telling a story, with the aim of user-independent continuous gesture recognition combined with audio information.
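
The linked evaluation procedure gives the official details. In brief, continuous recognition of this kind is scored with the edit (Levenshtein) distance between the ordered sequence of predicted gesture labels and the ground-truth sequence, normalized by the total number of true gestures. The Python sketch below is a minimal illustration of that idea, not the official scoring script; the function names and the exact normalization are illustrative assumptions.

    from typing import Sequence

    def levenshtein(pred: Sequence[int], truth: Sequence[int]) -> int:
        # Minimum number of insertions, deletions and substitutions
        # needed to turn the predicted label sequence into the truth.
        dp = list(range(len(truth) + 1))
        for i in range(1, len(pred) + 1):
            prev, dp[0] = dp[0], i
            for j in range(1, len(truth) + 1):
                cost = 0 if pred[i - 1] == truth[j - 1] else 1
                prev, dp[j] = dp[j], min(dp[j] + 1,      # delete from pred
                                         dp[j - 1] + 1,  # insert into pred
                                         prev + cost)    # substitute
        return dp[-1]

    def score(predictions, ground_truth):
        # Sum of edit distances over all test sequences, divided by the
        # total number of ground-truth gestures (lower is better).
        total = sum(levenshtein(p, t) for p, t in zip(predictions, ground_truth))
        return total / sum(len(t) for t in ground_truth)

    # One video whose true sequence has three gestures; the prediction
    # misses the middle one, giving a score of 1/3.
    print(score([[3, 12]], [[3, 7, 12]]))

Under this scheme, a score of 0 means every gesture sequence was recognized exactly; each missed, spurious, or misclassified gesture adds one edit to the numerator.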

The best workshop papers and the top three ranked participants in the quantitative evaluation will be invited to present their work at ICMI 2013, and their papers will be published in the ACM proceedings. Additionally, there will be travel grants (subject to availability) and the possibility of being invited to submit extended versions of their work to a special issue of a high-impact-factor journal. Moreover, the top three ranking participants in both the quantitative and qualitative challenges will be awarded a ChaLearn winner certificate and a monetary prize (subject to availability). We will also announce best paper and best student paper awards among the workshop contributions.