
This Friday, we'll discuss "What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes":

"In multi-agent systems, robust processes can emerge that are not particularly sensitive to which agents carry out which parts of the process. I call these processes Robust Agent-Agnostic Processes (RAAPs), and claim that there are at least a few bad RAAPs that could pose existential threats to humanity as automation and AI capabilities improve. Wars and economies are categories of RAAPs that I consider relatively "obvious" to think about; however, there may be a much richer space of AI-enabled RAAPs that could yield existential threats or benefits to humanity. Hence, directing more x-risk-oriented AI research attention toward understanding RAAPs and how to make them safe for humanity seems prudent, and perhaps necessary to ensure the existential safety of AI technology. Since researchers in multi-agent systems and multi-agent RL already think about RAAPs implicitly, these areas present a promising space for x-risk-oriented AI researchers to begin thinking about and learning from."

Post: https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failu...

Summary in the Alignment Newsletter: https://www.alignmentforum.org/posts/AwxBGFy59DYDk4ooe/an-146-plausible-stor...

We'll meet Friday at 1.

https://oregonstate.zoom.us/j/2739792686?pwd=VkRUeHJkYnhvTzlvZzR6YnZWNERKQT0...

Alex Turner