What if we could take people completely out of the equation when planning military strikes? 'Lethal autonomous weapons systems' use artificial intelligence to identify, select and kill human targets without human intervention. With unmanned military drones, the decision to strike is made remotely by a human operator; with lethal autonomous weapons, that decision is made by algorithms. But how does this work, and what are the dangers of the proliferation of these weapons?
James is joined by Emilia Javorsky, a physician from the Future of Life Institute. Emilia talks us through what a future with autonomous weapons might look like, including the risks to our world and to the development of artificial intelligence.
For more Warfare content, subscribe to our Warfare newsletter here.
If you'd like to learn even more, we have hundreds of history documentaries, ad-free podcasts, and audiobooks at History Hit - subscribe today!
Email us at email@example.com
See acast.com/privacy for privacy and opt-out information.