2015 Looking At People ICCV challenge

ChaLearn LAP Challenge and Workshop @ ICCV2015

Apparent Age Estimation and Cultural Event Recognition

Centro Park Convention Center, Santiago de Chile

December 12, 2015


For ICCV 2015, ChaLearn organized two parallel quantitative challenge tracks on RGB data.

PRIZES: For each track, the first-, second- and third-place winners were awarded 1,500, 1,000 and 500 US dollars, respectively. In addition, the three winners of each track received a travel grant of 500 US dollars and an NVIDIA Titan X device.

Track 1: Apparent Age Estimation: 5,000 images, each displaying a single individual, labeled with the person's apparent age. Each image was labeled by multiple annotators through a collaborative Facebook application. The variance of the votes is used as a measure of the error for the predictions. This is the first state-of-the-art database for apparent age estimation, as opposed to real age estimation.
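Since the error takes the variance of the annotators' votes into account, a prediction is penalized less on ambiguous images where the votes disagree. A minimal sketch of such a Gaussian-weighted error (the function name and exact formulation are illustrative assumptions, not the official challenge code):

```python
import math

def apparent_age_error(prediction, vote_mean, vote_std):
    """Illustrative Gaussian-weighted error: 0 for a prediction equal to the
    mean vote, approaching 1 as the prediction moves away from it.
    A larger vote standard deviation (more annotator disagreement)
    widens the tolerance around the mean."""
    if vote_std == 0:
        # Degenerate case: all annotators agreed exactly.
        return 0.0 if prediction == vote_mean else 1.0
    return 1.0 - math.exp(-((prediction - vote_mean) ** 2)
                          / (2.0 * vote_std ** 2))

# Predicting the mean vote gives zero error; a 3-year miss with
# a 3-year vote spread gives an error of 1 - exp(-0.5) ~= 0.39.
print(apparent_age_error(30.0, 30.0, 3.0))
print(apparent_age_error(33.0, 30.0, 3.0))
```

Under this scheme, the same absolute age error costs more on an image whose apparent age the annotators agreed on than on one where votes were widely spread.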

Track 2: Cultural Event Recognition: Nearly 30,000 images corresponding to 100 different cultural event categories were considered. In all the categories, garments, human poses, objects and context are possible cues to be exploited for recognizing the events, while preserving the inherent inter- and intra-class variability of this type of image. Examples of cultural events are Carnival, Oktoberfest, San Fermín, Maha Kumbh Mela and Aoi Matsuri. Jordi Gonzàlez and Júnior Fabian gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for creating the baseline of this track.

Figure 1: Samples of the Apparent Age Estimation track

Figure 2: Samples of the Cultural Event Classification track

The top three ranked participants on each track were awarded prizes and invited to follow the workshop submission guide for inclusion of a description of their system in the ICCV 2015 conference proceedings.

The sponsorship of Microsoft Research, University of Barcelona, Amazon, INAOE, VISADA, Google, NVIDIA Corporation, Facebook, and Disney Research is gratefully acknowledged. This research has been partially supported by projects TIN2012-39051 and TIN2013-43478-P.