Aadirupa Saha (TTIC) - Battling Bandits: Exploiting Preference Feedback for Efficient Information Aggregation
User statistics collected across several real-world systems reflect that people often prefer expressing their liking for a given pair of items, say (A, B), through relative queries such as "Do you prefer Item A over B?" rather than absolute ones such as "How would you score items A and B on a scale of 0-10?".
This search for a more effective feedback-collection mechanism inspired the well-known formulation of Dueling Bandits (DB), a widely studied online learning framework for efficient information aggregation from relative/comparative feedback. Unfortunately, despite this novel objective, most existing DB techniques are limited to the simpler settings of finite decision spaces and stochastic environments, which are unrealistic in practice.
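To make the relative-feedback model concrete, here is a minimal sketch (not the algorithm from the talk) of a stochastic dueling-bandit environment. It assumes a Bradley-Terry preference model, a standard choice in the DB literature, where the probability that item i wins a duel against item j is s_i / (s_i + s_j) for hypothetical utility scores s; the naive uniform-exploration strategy shown here simply duels every pair and checks for a Condorcet winner.

```python
import random


def duel(i, j, scores, rng):
    """Simulate one relative-feedback query: does item i beat item j?

    Assumes a Bradley-Terry model: P(i beats j) = s_i / (s_i + s_j).
    """
    return rng.random() < scores[i] / (scores[i] + scores[j])


def find_condorcet_winner(scores, duels_per_pair=500, seed=0):
    """Naive uniform exploration: duel every pair equally often, then
    return the arm that wins a majority of duels against every other arm
    (the empirical Condorcet winner), or None if no such arm exists.
    """
    rng = random.Random(seed)
    k = len(scores)
    wins = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            for _ in range(duels_per_pair):
                if duel(i, j, scores, rng):
                    wins[i][j] += 1
                else:
                    wins[j][i] += 1
    for i in range(k):
        if all(wins[i][j] > duels_per_pair / 2 for j in range(k) if j != i):
            return i
    return None


# With hypothetical scores [0.1, 0.9, 0.3], arm 1 beats arm 0 with
# probability 0.9 and arm 2 with probability 0.75, so it is the
# Condorcet winner with overwhelming probability.
best = find_condorcet_winner([0.1, 0.9, 0.3])
```

Actual DB algorithms replace this uniform exploration with adaptive schemes that concentrate duels on the most promising pairs to minimize regret.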
In this talk, we will start with the basic problem formulations for DB and familiarize ourselves with some of the breakthrough results. Following this, we will dive deeper into the more practical framework of contextual dueling bandits (C-DB), where the goal of the learner is to make customized predictions based on user contexts. We will see a new algorithmic approach that efficiently achieves the optimal regret for this problem, resolving an open problem from Dudík et al. [COLT, 2015]. We will conclude the talk with some interesting open problems.
[The discussion on C-DB setup is based on a joint work with Akshay Krishnamurthy (MSR, NYC), ALT 2022]
Speakers
Aadirupa Saha
Aadirupa Saha is a visiting faculty member at TTI Chicago. Before this, she was a postdoctoral researcher at Microsoft Research New York City. She obtained her Ph.D. from the Department of Computer Science, Indian Institute of Science, Bangalore, advised by Aditya Gopalan and Chiranjib Bhattacharyya. She has interned at Microsoft Research, Bangalore; Inria, Paris; and Google AI, Mountain View.
Her research interests include bandits, reinforcement learning, optimization, learning theory, and algorithms. Of late, she has also been very interested in problems at the intersection of ML and game theory, algorithmic fairness, and privacy.