[AI] Alignment Reading Group: Objective Robustness in Deep Reinforcement Learning