Eirin Evjen and Markus Anderljung, AI Safety - Human values aligned with AI part 1/2
Sep 21, 2019 · 28 min

In this first episode of two, we talk about human values and how we should plan for aligning them with AI as we begin building artificial general intelligence (AGI). The goal of long-term artificial intelligence safety is to ensure that advanced AI systems are aligned with human values — that they reliably do what people want them to do.

You will get the perspectives of Eirin Evjen, Executive Director of Effective Altruism Norway, and Markus Anderljung, Project Manager for Operations & Policy Engagement at the Future of Humanity Institute.
