Dec 10, 2020
1 - Adversarial Policies with Adam Gleave
Play • 59 min

In this episode, Adam Gleave and I talk about adversarial policies. Basically, in current reinforcement learning, people train agents that act in some kind of environment, sometimes an environment that contains other agents. For instance, you might train agents that play sumo with each other, with the objective of making them generally good at sumo. Adam's research looks at the case where all you're trying to do is make an agent that defeats one specific other agent: how easy is it, and what happens? He discovers that often, you can do it pretty easily, and your agent can behave in a very silly-seeming way that nevertheless happens to exploit some 'bug' in the opponent. We talk about the experiments he ran, the results, and what they say about how we do reinforcement learning.

Link to the paper - Adversarial Policies: Attacking Deep Reinforcement Learning

Link to the transcript

Adam's website

Adam's twitter account
