Talk page

Title:
Sparsifying and Derandomizing the Johnson-Lindenstrauss Transform

Speaker:
Jelani Nelson

Abstract:
The Johnson-Lindenstrauss lemma states that for any n points in Euclidean space and any error parameter 0 < eps < 1/2, there exists an embedding into k = O(eps^{-2} * log n)-dimensional Euclidean space that preserves all pairwise distances up to a 1+eps factor. This lemma has applications in high-dimensional computational geometry (reducing the dimension makes many algorithms run faster), compressed sensing, and numerical linear algebra. All known proofs of the lemma construct a distribution over linear mappings such that a random mapping drawn from it suffices with high probability. In this talk, I will present various proofs of the JL lemma satisfying, for example: (1) the support of the distribution is small, so that a random embedding can be selected with few random bits (e.g. O(log n * loglog n) bits for constant eps, which is suboptimal by a loglog n factor); and (2) every embedding matrix in the support of the distribution is sparse (only O(eps*k) entries per column are nonzero), which speeds up computation. I will also describe some open problems. This talk is based on joint works with Daniel Kane (Harvard).
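
As a rough illustration of point (2) above, here is a minimal numpy sketch of a sparse JL-style embedding: a k x d matrix in which each column has s nonzero entries, each a uniformly random sign scaled by 1/sqrt(s). The CountSketch-style placement of the s entries in distinct random rows and the helper name sparse_jl_matrix are illustrative assumptions, not necessarily the construction analyzed in the talk.

import numpy as np

def sparse_jl_matrix(d, k, s, seed=None):
    # Build a k x d embedding matrix with s nonzero entries per
    # column, each +-1/sqrt(s), placed in s distinct random rows.
    # (Illustrative sparse-JL construction; the talk's construction
    # may choose rows and signs differently.)
    rng = np.random.default_rng(seed)
    A = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)  # s distinct rows
        signs = rng.choice([-1.0, 1.0], size=s)      # random signs
        A[rows, j] = signs / np.sqrt(s)
    return A

# Usage: embed n points in R^d into R^k, with k = O(eps^{-2} * log n)
# and sparsity s = O(eps * k) per column as in the abstract
# (constant factors here are arbitrary choices for the demo).
n, d, eps = 1000, 10_000, 0.25
k = int(np.ceil(eps ** -2 * np.log(n)))
s = max(1, int(np.ceil(eps * k)))
X = np.random.default_rng(0).normal(size=(n, d))  # n points in R^d
Y = X @ sparse_jl_matrix(d, k, s, seed=1).T       # n points in R^k

With s nonzeros per column, applying the embedding to one vector costs O(s * d) operations rather than O(k * d), which is the computational advantage sparsity provides.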

Link:
https://www.ias.edu/video/csdm/nelson