Abstract:
Optimal Experiment Design is a foundational area in statistics, closely tied to Active Learning in Machine Learning. The core idea is to estimate unknown quantities by strategically interacting with a system through carefully chosen queries, maximizing the information gained within a limited budget. While traditional approaches assume any query can be chosen at any time, this talk explores a more dynamic scenario: as interactions progress, they change the experimenter's state, constraining which queries are available next. These evolving states are modeled as a Markov chain, turning the process into a Markov Decision Process (MDP) with a non-linear reward function that encodes the specific experimental goal. I'll discuss applications such as spatial surveillance and chemical reactor optimization, linking this framework to optimal control theory. We'll also address the computational hardness of the general problem and introduce practical approximation methods based on convex relaxation.
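To make the classical (unconstrained) starting point concrete, here is a minimal sketch of optimal experiment design via convex relaxation. It is an illustration only, not the method of the talk: it uses the standard D-optimality criterion and a multiplicative (Fedorov-Wynn style) update on query weights, with made-up random candidate queries.

```python
import numpy as np

# Illustrative sketch: D-optimal experiment design on the convex relaxation.
# Candidate queries x_i live in R^d; we optimize weights w_i (the fraction of
# the budget spent on query i) to maximize log det of the information matrix
# M(w) = sum_i w_i x_i x_i^T. All data here is synthetic.

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # 50 hypothetical candidate queries in R^3
n, d = X.shape
w = np.full(n, 1.0 / n)            # start from the uniform design

for _ in range(500):
    M = (X * w[:, None]).T @ X     # information matrix sum_i w_i x_i x_i^T
    # Predictive variance of each candidate: x_i^T M^{-1} x_i.
    var = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
    # Multiplicative update; since sum_i w_i var_i = d, w stays normalized.
    w *= var / d

# Kiefer-Wolfowitz: at the D-optimal design, max_i var_i equals d.
print(round(var.max(), 3))  # close to d = 3
```

In the talk's setting this relaxation is no longer directly applicable, because the Markov chain over experimenter states dictates which queries are reachable; the weights above would have to be realized by a feasible policy of the MDP.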
The Laboratory for Simulation and Modeling
SDSC Hub at PSI