Why are we more afraid of Terminator than the real risk of algorithmic bias? How can we improve the basic understanding of AI across the many sectors that are applying and relying on it? Is there a way to operationalize trust in AI?
In this episode of IBM thinkLeaders podcast, we are joined by guests Nicolas Economou (CEO of H5) & Sarah Judd (Curriculum Manager for open learning at AI4ALL). We talk to Nicolas and Sarah about the importance of auditing AI, increasing AI education for underrepresented groups, and ensuring that democratic values are baked into AI.
Connect with us @IBMthinkLeaders
“One of the main reasons I got into teaching AI to high schoolers is to have a populace that is more informed about what is actually scary and what is not actually scary, and how we can mitigate the risks of the things that are actually scary.” -Sarah Judd, Curriculum Manager for open learning at AI4ALL
“It seems like in many, many areas we're falling into what I call an efficiency trap, where we rely on AI to produce results quickly, forgetting that along the way the decisions or predictions being made may not uphold the values that we care about in a democratic order.” -Nicolas Economou, CEO of H5