Ph.D. candidate, Carnegie Mellon University
Foundations for machine learning by the people, for the people — with a focus on learning in economic mechanisms
Machine learning is the study of the design and analysis of algorithms that compute general facts about an underlying data-generating process from limited observations of that data. Classically, the outcome of a learning algorithm is considered in isolation from the effects it may have on the process that generates the data or computes the outcome. With data science and machine learning applications revolutionizing day-to-day life, however, increasingly many people and organizations interact with learning systems. It is essential to account for the wide variety of social and economic limitations, aspirations, and behaviors demonstrated by these people and organizations, which fundamentally change the nature of learning tasks and the challenges involved. My research on the theoretical aspects of machine learning and economics focuses on developing a foundation for machine learning that accounts for these interactions at every step of the way — in short, developing theoretical foundations for machine learning by the people, for the people.
In this poster, I describe one of my works in this area, which investigates how one can optimize the parameters of an auction, or more generally of economic mechanisms, by learning customers' preferences. A key challenge in this space is that customer preferences may evolve over time as a result of their earlier interactions with the auction; e.g., buyers who recently purchased a one-year supply of an item may not be interested in the same item, even at a lower price. Existing non-adaptive tools for optimizing auction parameters do not take these fluctuations into account and may output severely suboptimal solutions. In our work, we introduce adaptive learning algorithms for optimizing auction parameters that perform well in an ever-changing environment. By design, our adaptive algorithms tap, with little overhead, into existing non-adaptive tools designed to find optimal auctions on historical data, thereby achieving robustness to changes in the environment without the need to build new specialized tools from scratch.
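As a toy illustration only (not the method of the work described above), adaptive optimization of an auction parameter under shifting preferences can be sketched with the classic Hedge (multiplicative weights) rule over a grid of candidate reserve prices, assuming a single-buyer posted-price setting with full-information revenue feedback. All names and parameters here are hypothetical choices for the sketch.

```python
import math

def hedge_reserve_prices(prices, revenue_stream, eta=0.5):
    """Hedge (multiplicative weights) over candidate reserve prices.
    `revenue_stream` yields, per round, the revenue each candidate
    price would have earned (full-information feedback)."""
    weights = [1.0] * len(prices)
    for revenues in revenue_stream:
        # Exponentially up-weight prices that earned more this round.
        weights = [w * math.exp(eta * r) for w, r in zip(weights, revenues)]
        total = sum(weights)
        weights = [w / total for w in weights]  # renormalize
    return weights

def posted_price_revenue(price, valuation):
    # A single buyer purchases iff her valuation meets the price.
    return price if valuation >= price else 0.0

# Toy non-stationary environment: the buyer's valuation drops halfway
# through, so the best fixed reserve price changes over time.
prices = [0.2, 0.4, 0.6, 0.8]
valuations = [0.65] * 50 + [0.45] * 50
stream = ([posted_price_revenue(p, v) for p in prices] for v in valuations)
weights = hedge_reserve_prices(prices, stream)
```

In this stream, the price 0.6 earns the most early on but nothing after the drop, so the algorithm's weight shifts toward 0.4, which sells in every round; a fixed tool trained only on the first half would keep recommending 0.6.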
Nika Haghtalab is a Ph.D. candidate in the Computer Science Department at Carnegie Mellon University, co-advised by Avrim Blum and Ariel Procaccia. Her research lies at the intersection of the theory of machine learning, computational aspects of economics, and algorithms, with a focus on designing machine learning and optimization algorithms that account for a wide range of social and economic interactions. Nika is a recipient of the IBM fellowship (2015-2016) and the Microsoft Research Ph.D. fellowship (2016-2018). She was a research intern at Microsoft Research-Redmond in summer 2015 and at Microsoft Research-NYC in summer 2016, and a visiting student researcher at Stanford, hosted by Tim Roughgarden, in spring 2017.