Abstract:
Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other domains. Often, constraints on safe inputs are unknown ahead of time, and we obtain only noisy information indicating how close we are to violating them. Yet safety must be guaranteed at all times, not only for the final output of the algorithm.
First, we introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. It yields efficient updates and scales better with dimensionality than existing approaches. Beyond synthetic benchmarks, we demonstrate its effectiveness at minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
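As a rough illustration, the following is a minimal Python sketch of the LB-SGD idea for minimizing f(x) subject to g_i(x) <= 0: run SGD on the barrier surrogate B_eta(x) = f(x) - eta * sum_i log(-g_i(x)) with a step size capped by the distance to the constraint boundary. All function names, the simple step-size cap, and the hyperparameters are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def lb_sgd(x0, grad_f, g, jac_g, eta=0.1, n_steps=1000):
    """Sketch of log-barrier SGD (hypothetical signatures).

    grad_f, g, and jac_g may be noisy estimators of the objective
    gradient, constraint values, and stacked constraint gradients.
    """
    x = x0.copy()
    for _ in range(n_steps):
        gx = g(x)                 # constraint values, strictly negative inside
        alpha = -gx               # slack: distance to each constraint boundary
        Jg = jac_g(x)             # constraint gradients, shape (m, d)
        # gradient of the barrier surrogate B_eta
        d = grad_f(x) + eta * (Jg / alpha[:, None]).sum(axis=0)
        nd = np.linalg.norm(d)
        if nd < 1e-12:
            break                 # approximate stationary point of the surrogate
        # adaptive step size: a crude stand-in for the paper's rule, capping
        # the move so no linearized constraint slack shrinks by more than half
        row_norms = np.linalg.norm(Jg, axis=1)
        gamma = min(eta / nd**2,
                    0.5 * np.min(alpha / (row_norms * nd + 1e-12)))
        x = x - gamma * d
    return x
```

Because every step is a plain gradient update, the per-iteration cost is linear in the dimension, which is the scaling advantage the abstract refers to.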
Second, for the specific setting of safe learning with a single smooth constraint, we introduce an alternative primal-dual safe learning method. We show that our primal-dual algorithm achieves a better convergence rate and is less noise-sensitive than the current state-of-the-art approaches for this setting.
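For orientation only, the generic primal-dual template for this setting looks as follows: alternate a descent step on the Lagrangian L(x, lam) = f(x) + lam * g(x) with projected dual ascent keeping lam >= 0. This is a standard textbook scheme, not the paper's exact updates; all names and step sizes are illustrative.

```python
import numpy as np

def primal_dual(x0, grad_f, g, grad_g, eta_x=0.01, eta_lam=0.1, n_steps=1000):
    """Generic primal-dual iteration for min f(x) s.t. g(x) <= 0
    with a single smooth constraint (illustrative template)."""
    x, lam = x0.copy(), 0.0
    for _ in range(n_steps):
        x = x - eta_x * (grad_f(x) + lam * grad_g(x))  # primal descent
        lam = max(0.0, lam + eta_lam * g(x))           # projected dual ascent
    return x, lam
```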
Laboratory for Simulation and Modelling
SDSC Hub @ PSI