Software Engineer, Google Research
Small Models for Big Data
Many complex machine learning models such as deep neural networks have demonstrated tremendous success on big data. However, these advances are not always feasible when deploying such models to devices, due to their large model size and evaluation cost. In many real-world applications, such as robotics, self-driving cars, and smartphone apps, the learning tasks need to be carried out in a timely fashion on a computation- and memory-limited platform. Therefore, it is extremely important to study how to build “small” models from “big” machine learning models trained on big data. The main topic of my research is to investigate how to reduce the model size and speed up the evaluation of complex machine learning models while maintaining similar accuracy. Specifically, I will discuss how to compress models for different real-world machine learning applications, including extreme multi-label learning and classification.
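As an illustration of the general idea of trading model size for accuracy (a minimal sketch of a standard low-rank compression baseline, not the speaker's specific method; all matrix sizes and the rank below are hypothetical), a dense weight matrix can be replaced by a truncated SVD factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # hypothetical dense layer weights

rank = 32  # target rank, chosen purely for illustration
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # 512 x 32 factor
B = Vt[:rank, :]             # 32 x 256 factor

orig_params = W.size                  # 512 * 256 parameters
compressed_params = A.size + B.size   # far fewer parameters at low rank

# Evaluation also gets cheaper: W @ x becomes A @ (B @ x),
# two skinny matrix-vector products instead of one large one.
x = rng.standard_normal(256)
approx = A @ (B @ x)
```

The rank controls the trade-off: a smaller rank gives a smaller, faster model at the cost of a coarser approximation of the original weights.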
Si Si is a researcher and software engineer at Google Research. Her research focus is developing scalable machine learning models. Si obtained her M.Phil. degree in 2010 from the University of Hong Kong, and her Ph.D. from the University of Texas at Austin in 2016. She is the recipient of the MCD fellowship (2010-2013) and the best paper award at ICDM 2012. She has served on the steering committee for research women at Google and as a reviewer for many conferences and journals, including ICML, NIPS, KDD, and JMLR.