IJCNN 2017 Special Session on Explainability of Learning Machines
May 2017 (exact day TBA)
** PREPARE YOUR PAPERS NOW: November 15, 2016, submission deadline **
(submit through the IJCNN website and select our special session)
Research progress in machine learning and pattern recognition has led to a variety of modeling techniques with (almost) human-like performance on a variety of tasks. A clear example is neural networks, whose deep variants dominate the arenas of computer vision and natural language processing, among other fields. Although these models have obtained astounding results on a variety of tasks (e.g., face recognition with FaceNet), they are limited in their explainability and interpretability. That is, in general, users cannot say much about:
- What is the rationale behind the decision made? (explainability)
- What in the model structure explains its functioning? (interpretability)
This, in turn, raises multiple questions about decisions (why one decision is preferred over another, how confident the learning machine is in its decision, and what series of steps led the learning machine to a given decision) and about model structure (why a particular parameter configuration was chosen, what the parameters mean, how a user could interpret the learned model, and what additional knowledge would be required from the user/world to improve the model). Hence, while good performance is a critical requirement for learning machines, explainability/interpretability capabilities are highly needed if one wants to take learning machines to the next step, and in particular to include them in decision support systems involving human supervision (for instance, in medicine or in security). It is only recently that the community has made efforts in this direction, see e.g., [2,3,4] (there is even an open call on this topic from DARPA); therefore, we think it is the perfect time to organize a special session around this relevant topic.
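As a toy illustration of the questions above (not part of the call itself; the feature names, weights, and input values are all hypothetical), consider a linear scorer: its decision can be decomposed into per-feature contributions, which is one simple form of explanation.

```python
# Toy sketch: explaining a linear model's decision by per-feature
# contributions. All names and numbers below are hypothetical.

def explain_decision(weights, x, feature_names):
    """Return the decision score and each feature's contribution to it."""
    contributions = {name: w * v
                     for name, w, v in zip(feature_names, weights, x)}
    score = sum(contributions.values())
    return score, contributions

weights = [0.8, -0.5, 0.3]            # learned coefficients (hypothetical)
x = [1.0, 2.0, 0.5]                   # one input instance
names = ["age", "income", "tenure"]

score, contribs = explain_decision(weights, x, names)
# score = 0.8*1.0 - 0.5*2.0 + 0.3*0.5 = -0.05
# contribs shows which features pushed the decision up or down
```

For deep models the decomposition is far less direct, which is precisely the gap the special session addresses.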
We are organizing a special session on explainable machine learning. This session aims to compile the latest efforts and research advances from the scientific community in enhancing traditional machine learning algorithms with explainability capabilities at both the learning and decision stages. Likewise, the special session targets novel methodologies and algorithms implementing explanatory mechanisms.
We foresee that this special session will capture a snapshot of cutting-edge research on explainable learning machines and will serve to identify priority research directions in this novel and rapidly developing research topic.
The scope of the special session comprises all aspects of explainability of learning machines, including but not limited to the following topics:
- Explainability of all aspects of machine learning techniques for classification, regression, clustering, feature selection & extraction, ensemble learning, deep learning, etc.
- Generation of explanations from the outputs of traditional learning machines.
- Explainability of learned (trained) models for specific tasks.
- Training and learning procedures leading to explainable models.
- Natural language explanations of decisions taken by learning machines.
In addition, because of the theme of an associated competition (under evaluation), we consider the following topics also relevant to the special session:
- Automatic personality analysis.
- Social signal processing.
- Automated personality profiling.
- Automated job interviews.
- November 15th: Special session submission deadline
- January 20th: Decision notification
- February 20th: Camera-ready submission
Please prepare and submit your paper according to the IJCNN 2017 submission guidelines. Make sure to select the special session on Explainability of Learning Machines.
- Isabelle Guyon (ChaLearn, Université Paris Saclay)
- Hugo Jair Escalante (ChaLearn, INAOE)
- Sergio Escalera (UB, CVC)
- Evelyne Viegas (Microsoft Research)
- Yağmur Güçlütürk (Radboud University)
- Umut Güçlü (Radboud University)
- Marcel van Gerven (Radboud University)
- Rob van Lier (Radboud University)
Thanks to our IJCNN 2017 special session sponsors: Microsoft Research, ChaLearn, University of Barcelona, INAOE, Université Paris Saclay, and more TBA. This research has been partially supported by projects TIN2012-39051 and TIN2013-43478-P.