
Hello everyone,

The next alignment reading group meeting will again be at 2 PM PST this Friday. We will discuss Objective Robustness in Deep Reinforcement Learning <https://arxiv.org/abs/2105.14111>. Anyone interested in the reliability and safety of current reinforcement learning systems should feel free to attend.

Paper abstract:
We study objective robustness failures, a type of out-of-distribution robustness failure in reinforcement learning (RL). Objective robustness failures occur when an RL agent retains its capabilities out-of-distribution yet pursues the wrong objective. This kind of failure presents different risks than the robustness problems usually considered in the literature, since it involves agents that leverage their capabilities to pursue the wrong objective rather than simply failing to do anything useful. We provide the first explicit empirical demonstrations of objective robustness failures and present a partial characterization of its causes.
All the best,
Quintin