Disentanglement challenge

The NeurIPS 2019 challenge on learning disentangled representations


The success of machine learning algorithms depends heavily on how the data is represented. It is widely believed that a good representation is distributed, invariant, and disentangled. This challenge focuses on disentangled representations, in which the explanatory factors of the data change independently of each other.
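The idea can be illustrated with a toy numerical sketch (this is not the challenge data or evaluation code; the linear mixing and the factor names are illustrative assumptions). Two independent ground-truth factors generate observations, and a disentangled code is one in which each coordinate tracks exactly one factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent ground-truth factors, e.g. object colour and arm position.
factors = rng.uniform(-1.0, 1.0, size=(1000, 2))

# Observations mix the factors through a fixed linear map
# (a stand-in for the rendering process).
mixing = np.array([[0.8, 0.6],
                   [-0.6, 0.8]])
observations = factors @ mixing.T

# An entangled code: each coordinate depends on both factors.
entangled = observations

# A disentangled code: each coordinate tracks exactly one factor.
# Here we cheat and apply the known inverse; a learner must find it blindly.
disentangled = observations @ np.linalg.inv(mixing).T

def factor_correlations(code):
    """Absolute correlation between each code dimension and each true factor."""
    return np.array([[abs(np.corrcoef(code[:, i], factors[:, j])[0, 1])
                      for j in range(2)] for i in range(2)])

print(factor_correlations(entangled))     # large off-diagonal entries
print(factor_correlations(disentangled))  # close to the identity matrix
```

A correlation matrix close to the identity means each latent dimension captures one factor of variation, which is what disentanglement metrics quantify in more refined ways.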

Why disentanglement?

Independent codes have proved useful across several areas of machine learning, including Causal Inference [1], Reinforcement Learning [2], Transfer Learning [3], Efficient Coding [4], and Neuroscience [5]. At the same time, there is still considerable room for improvement in algorithms that learn disentangled representations from data [6].

Real world challenge

Moving from simulations to real-world datasets introduces challenges that must be addressed before learning algorithms can be deployed in the wild. This challenge introduces a real-world dataset exhibiting several kinds of imperfection, such as camera distortion, uneven lighting, and colour and texture inhomogeneity.


We built a mechanical device to generate the dataset for this challenge. It consists of a robotic arm connected to a 3D-printed headpiece. Different factors of variation are generated by moving the arm, rotating the bottom plate, and changing the colour of the headpiece and of the background circular stripe.

How to participate

The open-source library disentanglement-lib provides a variety of models and evaluation metrics, making it easy for participants to get started.

The challenge will be hosted on AICrowd. Check back soon for more detailed instructions.

Important Dates

June 10, 2019: First round starts

July 15, 2019: Second round starts

August 30, 2019: Paper submission deadline

September 10, 2019: Challenge ends

September 30, 2019: Evaluation starts

October 15, 2019: Paper submission deadline


All participants are required to publish their method on OpenReview.



Organizers

Stefan Bauer (MPI)

Manuel Wüthrich (MPI)

Francesco Locatello (MPI, ETH)

Alexander Neitz (MPI)

Arash Mehrjou (MPI)

Djordje Miladinovic (ETH)

Waleed Gondal (MPI)

Olivier Bachem (Google Brain)

Martin Breidt (MPI)

Valentin Volchkov (MPI)

Joel Bessekon Akpo (MPI)

Yoshua Bengio (Mila)

Karin Bierig (MPI)

Bernhard Schölkopf (MPI)


References

[1] Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of causal inference: Foundations and learning algorithms. MIT Press.

[2] Lesort, T., Díaz-Rodríguez, N., Goudou, J.-F., & Filliat, D. (2018). State representation learning for control: An overview. Neural Networks.

[3] Achille, A., et al. (2018). Life-long disentangled representation learning with cross-domain latent homologies. Advances in Neural Information Processing Systems.

[4] Olshausen, B. A., & Field, D. J. (2004). Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4), 481-487.

[5] Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sensory Communication, 1, 217-234.

[6] Locatello, F., Bauer, S., Lucic, M., Gelly, S., Schölkopf, B., & Bachem, O. (2018). Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359.