2011/12 One-shot Gesture Challenge


Can you do one-shot learning?

Humans are capable of recognizing patterns like hand gestures after seeing just one example. Can machines do that too?
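As a toy illustration (not the challenge's evaluation protocol), one-shot classification can be posed as nearest-neighbor matching: each class is represented by a single labeled feature vector, and a query is assigned the label of its closest example. The feature vectors below are placeholders standing in for whatever representation is extracted from the videos.

```python
import numpy as np

def one_shot_classify(query, support):
    """Assign `query` the label of the nearest single training example.

    support: dict mapping class label -> one feature vector (the lone
    example available for that class in a one-shot setting).
    """
    labels = list(support)
    # Euclidean distance from the query to each class's single example
    dists = [np.linalg.norm(query - support[lbl]) for lbl in labels]
    return labels[int(np.argmin(dists))]

# Toy features: one example per (hypothetical) gesture class
support = {
    "wave": np.array([1.0, 0.0]),
    "point": np.array([0.0, 1.0]),
}
print(one_shot_classify(np.array([0.9, 0.1]), support))  # -> wave
```

Real systems replace the raw vectors with learned or hand-crafted descriptors of the depth video, but the one-example-per-class constraint is the same.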

We organized a challenge on gesture and sign language recognition from video, focusing mostly on hand and arm gestures, although facial expressions and whole-body motion may also play a role. Applications include recognizing signals for man-machine communication, translating sign language for communication between deaf and hearing people, and computer gaming.

The challenge was organized by ChaLearn and sponsored in part by Microsoft (Kinect for Xbox 360). The submission website was hosted by Kaggle.com. Other sponsors included Texas Instruments. This effort was initiated by the DARPA Deep Learning program and was supported by the US National Science Foundation (NSF) under grants ECCS 1128436 and ECCS 1128296, the EU Pascal2 network of excellence, and the Challenges in Machine Learning foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.