Talk page

Title:
Towards a theory of encoder/decoder architectures

Speaker:
Andrej Risteski

Abstract:
A common choice of architecture in representation learning (i.e., learning a good "embedding" of the data) is an encoder/decoder architecture: an "encoder" maps a part of the input to a latent representation, and a "decoder" predicts the remaining part of the input from that representation. Two common examples are universal machine translation, where one learns to translate between any pair of languages in a set via a common "latent language", given paired corpora for only some of the pairs; and context encoders, where one predicts a part of an image given the rest of the image.
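A minimal sketch of the encoder/decoder pattern the abstract describes (not from the talk; the layer sizes, names, and loss are illustrative assumptions): an encoder maps the observed part of the input to a latent representation, and a decoder predicts the held-out part from it, as a context encoder does for images.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, in_dim=256, latent_dim=32, out_dim=256):
        super().__init__()
        # Encoder: observed part of the input -> latent representation
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: latent representation -> prediction of the held-out part
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, observed_part):
        z = self.encoder(observed_part)   # latent "embedding"
        return self.decoder(z)            # predicted remaining part

# Training objective: reconstruct the held-out part from the observed part.
model = EncoderDecoder()
observed = torch.randn(8, 256)   # e.g. visible image region, flattened
held_out = torch.randn(8, 256)   # e.g. masked image region, flattened
loss = nn.functional.mse_loss(model(observed), held_out)
loss.backward()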

Link:
http://scgp.stonybrook.edu/video_portal/video.php?id=4389

Workshop:
Simons Program: Neural networks and the Data Science Revolution: from theoretical physics to neuroscience, and back