The NeurIPS 2019 challenge on learning disentangled representations
The success of machine learning algorithms depends heavily on the representation of the data. It is widely believed that a good representation is distributed, invariant, and disentangled. This challenge focuses on disentangled representations, in which the explanatory factors of the data tend to change independently of each other.
Independent codes have proven useful in several areas of machine learning, including causal inference, reinforcement learning, transfer learning, efficient coding, and neuroscience. At the same time, there is still considerable room for improvement in learning algorithms that find disentangled representations of data.
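Many methods evaluated in this setting are variants of the variational autoencoder; for instance, β-VAE upweights the KL term of the VAE objective to pressure the latent code toward a factorized prior. A minimal sketch of that objective, assuming a diagonal Gaussian posterior (the function name and arguments are illustrative, not part of any challenge API):

```python
import numpy as np

def beta_vae_loss(recon_error, mu, logvar, beta=4.0):
    """Illustrative beta-VAE objective for a diagonal Gaussian posterior.

    recon_error: scalar reconstruction loss already computed elsewhere.
    mu, logvar:  arrays of shape (batch, latent_dim) parameterizing q(z|x).
    beta:        weight on the KL term; beta > 1 encourages disentanglement.
    """
    # KL divergence between N(mu, sigma^2) and the prior N(0, I),
    # summed over latent dimensions, averaged over the batch.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return recon_error + beta * np.mean(kl)
```

With `beta=1` this reduces to the standard VAE evidence lower bound; larger values trade reconstruction quality for more factorized codes.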
Real-world challenge
Going from simulations to real-world datasets introduces challenges that must be addressed before learning algorithms can be deployed in the wild. This challenge introduces a real-world dataset exhibiting various imperfections, such as camera distortion, uneven lighting, and colour and texture inhomogeneity.
We built a mechanical device to generate the dataset for this challenge. It consists of a robotic arm with a 3D-printed headpiece attached. Different factors of variation are generated by moving the arm, rotating the bottom plate, and changing the colour of the headpiece and of the background circular stripe.
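The factors listed above can be thought of as coordinates into the dataset: in disentanglement datasets of this kind, each image corresponds to one combination of ground-truth factor values. A minimal sketch of that indexing, with hypothetical factor names and cardinalities (the real dataset's values may differ):

```python
import numpy as np

# Hypothetical factor spaces for a dataset like the one described;
# the actual names and cardinalities are illustrative assumptions.
factor_sizes = {
    "arm_joint_angle": 10,
    "plate_rotation": 8,
    "headpiece_colour": 4,
    "stripe_colour": 4,
}

def factors_to_index(factors, sizes=tuple(factor_sizes.values())):
    """Map a tuple of factor values to a flat image index (row-major)."""
    return int(np.ravel_multi_index(factors, sizes))
```

The inverse mapping (`np.unravel_index`) recovers the ground-truth factors for a given image index, which is what supervised disentanglement metrics compare learned codes against.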
How to participate
June 10, 2019: First round starts
July 15, 2019: Second round starts
August 30, 2019: Paper submission deadline
September 10, 2019: Challenge ends
September 30, 2019: Evaluation starts
October 15, 2019: Paper submission deadline
All participants are required to publish their method on OpenReview.
Stefan Bauer (MPI)
Manuel Wüthrich (MPI)
Francesco Locatello (MPI, ETH)
Alexander Neitz (MPI)
Arash Mehrjou (MPI)
Djordje Miladinovic (ETH)
Waleed Gondal (MPI)
Olivier Bachem (Google Brain)
Martin Breidt (MPI)
Valentin Volchkov (MPI)
Joel Bessekon Akpo (MPI)
Yoshua Bengio (Mila)
Karin Bierig (MPI)
Bernhard Schölkopf (MPI)
Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
Lesort, T., Díaz-Rodríguez, N., Goudou, J.-F., & Filliat, D. (2018). State representation learning for control: An overview. Neural Networks.
Achille, A., et al. (2018). Life-long disentangled representation learning with cross-domain latent homologies. Advances in Neural Information Processing Systems.
Olshausen, B. A., & Field, D. J. (2004). Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4), 481–487.
Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sensory Communication, 1, 217–234.
Locatello, F., Bauer, S., Lucic, M., Gelly, S., Schölkopf, B., & Bachem, O. (2018). Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359.