Talk page

Title:
Multi-Output Prediction: Theory and Practice

Speaker:
Inderjit Dhillon

Abstract:
Many challenging problems in modern applications amount to finding relevant results from an enormous output space of potential candidates, for example, finding the best matching product from a large catalog or suggesting related search phrases on a search engine. The size of the output space for these problems can be in the millions to billions. Moreover, observational or training data is often limited for many of the so-called “long-tail” items in the output space. Given the inherent paucity of training data for most of the items in the output space, developing machine-learned models that perform well for spaces of this size is challenging. Fortunately, items in the output space are often correlated, which presents an opportunity to alleviate the data sparsity issue. In this talk, I will first discuss the challenges in modern multi-output prediction, including missing values, features associated with outputs, the absence of explicit negative examples, and the need to scale up to enormous data sets. Bilinear methods, such as Inductive Matrix Completion (IMC), allow us to handle missing values and output features in practice, while coming with theoretical guarantees. Nonlinear methods, such as nonlinear IMC and DSSM (Deep Semantic Similarity Model), yield more powerful models that are used in real-life applications. However, inference in these models scales linearly with the size of the output space.

In order to scale up, I will present the Prediction for Enormous and Correlated Output Spaces (PECOS) framework, which performs prediction in three phases: (i) in the first phase, the output space is organized using a semantic indexing scheme; (ii) in the second phase, the indexing is used to narrow down the output space by orders of magnitude using a machine-learned matching scheme; and (iii) in the third phase, the matched items are ranked by a final ranking scheme. The versatility and modularity of PECOS allow for easy plug-and-play of various choices for the indexing, matching, and ranking phases, and it is possible to ensemble various models, each arising from a particular choice for the three phases.
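
To make the bilinear idea concrete, here is a minimal NumPy sketch (not the speaker's implementation) of IMC-style scoring, score(x, y) = xᵀ W Hᵀ y, where W and H are learned low-rank factors mapping input and item features into a shared latent space. The dimensions, factor names, and random data are illustrative assumptions; the brute-force ranking at the end also shows why naive inference is linear in the number of items.

import numpy as np

d_in, d_out, rank = 100, 50, 10          # input-feature, item-feature, latent dims (illustrative)
rng = np.random.default_rng(0)

W = rng.normal(size=(d_in, rank))        # stand-in for a learned input-side factor
H = rng.normal(size=(d_out, rank))       # stand-in for a learned item-side factor

def imc_score(x, Y):
    """Score one input x against every row of the item-feature matrix Y."""
    return (x @ W) @ (Y @ H).T           # shape: (num_items,)

x = rng.normal(size=d_in)                # features of one query/user
Y = rng.normal(size=(1000, d_out))       # features of 1000 candidate items
scores = imc_score(x, Y)
top10 = np.argsort(-scores)[:10]         # brute-force ranking: cost grows linearly with the output space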

Link:
https://www.ias.edu/video/machinelearning/2020/0827-InderjitDhillon
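
For readers curious about the three-phase pipeline described in the abstract, the following is a minimal sketch, under illustrative assumptions, of PECOS-style inference: items are first clustered into a semantic index, a query is matched to a small number of clusters, and only the shortlisted items are ranked. The embeddings, k-means indexing, beam size, and inner-product scoring below are placeholder choices, not the specific models used in PECOS.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
num_items, dim, num_clusters, beam = 10_000, 64, 100, 5

item_emb = rng.normal(size=(num_items, dim))       # stand-in item embeddings

# Phase (i): semantic indexing -- partition the output space into clusters.
index = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit(item_emb)
cluster_of = index.labels_                          # item -> cluster id
centroids = index.cluster_centers_                  # cluster representatives

def predict(query_emb, k=10):
    # Phase (ii): matching -- score clusters, keep only the top `beam`,
    # narrowing the candidate set by orders of magnitude.
    cluster_scores = centroids @ query_emb
    matched = np.argsort(-cluster_scores)[:beam]
    candidates = np.flatnonzero(np.isin(cluster_of, matched))

    # Phase (iii): ranking -- score only the shortlisted items.
    item_scores = item_emb[candidates] @ query_emb
    order = np.argsort(-item_scores)[:k]
    return candidates[order]

top_items = predict(rng.normal(size=dim))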