Data Skeptic
Nov 27, 2020
Face Mask Sentiment Analysis
41 min

As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds?

Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Professor of Computer Science Jiebo Luo, to discuss their recent paper, "Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic."

Works Mentioned: https://arxiv.org/abs/2011.00336
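For a flavor of what tweet-level sentiment analysis looks like, here is a minimal Python sketch using NLTK's off-the-shelf VADER analyzer. This is a generic illustration, not the pipeline from the paper; the example tweets and the +/-0.05 compound-score cutoff are illustrative assumptions.

```python
# Minimal sketch of tweet-level sentiment scoring with NLTK's VADER.
# Illustrative only -- not the method used in the paper discussed above.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

# Hypothetical example tweets standing in for a real collected corpus.
tweets = [
    "Wearing a mask is a simple way to protect the people around you.",
    "I'm so tired of these mask mandates, they make no sense.",
]

for tweet in tweets:
    scores = analyzer.polarity_scores(tweet)
    # VADER's compound score lies in [-1, 1]; +/-0.05 is a common cutoff.
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05
             else "neutral")
    print(f"{label:8s} {scores['compound']:+.3f}  {tweet}")
```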

Emails: Neil Yeung nyeung@u.rochester.edu

Jonathan Lai jlai11@u.rochester.edu

Jiebo Luo jluo@cs.rochester.edu

Thanks to our sponsors!

  • Springboard School of Data offers a comprehensive career program encompassing data science, analytics, engineering, and machine learning. All courses are online and tailored to fit the lifestyle of working professionals. Up to 20 Data Skeptic listeners will receive $500 scholarships. Apply today at springboard.com/dataskeptic
  • Check out Brilliant's group theory course to learn about object-oriented design! Brilliant is great for learning something new or for getting an easy-to-look-at review of something you already know. Check them out at Brilliant.org/dataskeptic to get 20% off a year of Brilliant Premium!
Learning Bayesian Statistics
Alexandre ANDORRA
#31 Bayesian Cognitive Modeling & Decision-Making, with Michael Lee
I don’t know if you noticed, but I have a fondness for any topic related to decision-making under uncertainty — when it’s studied scientifically, of course. Understanding how and why people make decisions when they don’t have all the facts is fascinating to me. That’s why I like electoral forecasting and I love cognitive sciences. So, for the first episode of 2021, I have a special treat: I had the great pleasure of hosting Michael Lee on the podcast! Yes, the Michael Lee who co-authored the book Bayesian Cognitive Modeling with Eric-Jan Wagenmakers in 2013 — by the way, the book was ported to PyMC3; I put the link in the show notes ;)

This book was inspired by Michael’s work as a professor of cognitive sciences at the University of California, Irvine. He works a lot on representation, memory, learning, and decision-making, with a special focus on individual differences and collective cognition. Using naturally occurring behavioral data, he builds probabilistic generative models to try to answer hard real-world questions: how does memory impairment work (that’s modeled with multinomial processing trees)? How complex are simple decisions, and how do people change strategies?

Echoing episode 18 with Daniel Lakens, Michael and I also talked about the reproducibility crisis: how are the cognitive sciences doing, what progress has been made, and what is still to be done?

Living now in California, Michael is originally from Australia, where he did his Bachelors in Psychology and Mathematics and his PhD in Psychology. But Michael is also fond of the city of Amsterdam, which he sees as “the perfect antidote to southern California, with old buildings, public transport, great bread and beer, and crappy weather”.

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Jon Berezowski, Robin Taylor, Thomas Wiecki, Chad Scherrer, Vincent Arel-Bundock, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll and Nathaniel Burbank.
Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Michael's website: https://faculty.sites.uci.edu/mdlee/
Michael on Twitter: https://twitter.com/mdlBayes
Bayesian Cognitive Modeling book: https://faculty.sites.uci.edu/mdlee/bgm/
Bayesian Cognitive Modeling in PyMC3: https://github.com/pymc-devs/resources/tree/master/BCM
An application of multinomial processing tree models and Bayesian methods to understanding memory impairment: https://drive.google.com/file/d/1NHml_YUsnpbUaqFhu0h8EiLeJCx6q403/view
Understanding the Complexity of Simple Decisions -- Modeling Multiple Behaviors and Switching Strategies: https://webfiles.uci.edu/mdlee/LeeGluckWalsh2018.pdf
Robust Modeling in Cognitive Science: https://link.springer.com/article/10.1007/s42113-019-00029-y

This podcast uses the following third-party services for analysis:
Podcorn - https://podcorn.com/privacy

Support this podcast
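If you want to try the PyMC3 port of the book's examples, here is a minimal sketch in the spirit of its opening "inferring a rate" model. The 7-successes-in-10-trials data are made up for illustration; this is not code from the book or the repository linked above.

```python
# Minimal Bayesian "inferring a rate" model in PyMC3, in the spirit of
# the opening examples of Bayesian Cognitive Modeling. Data are made up.
import pymc3 as pm

k, n = 7, 10  # hypothetical: 7 successes observed in 10 trials

with pm.Model() as model:
    theta = pm.Beta("theta", alpha=1, beta=1)        # uniform prior on the rate
    obs = pm.Binomial("k", n=n, p=theta, observed=k) # likelihood of the data
    trace = pm.sample(2000, tune=1000, return_inferencedata=True)

print(trace.posterior["theta"].mean())  # posterior mean of the rate
```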
1 hr 9 min
Machine Learning Street Talk
#038 - Professor Kenneth Stanley - Why Greatness Cannot Be Planned
Professor Kenneth Stanley is currently a research science manager at OpenAI in San Francisco. We've been dreaming about getting Kenneth on the show since the very beginning of Machine Learning Street Talk. Some of you might recall that our first ever show was on the Enhanced POET paper, which of course Kenneth had his hands all over. He has been cited over 16,000 times; his most popular paper, with over 3,000 citations, introduced the NEAT algorithm. His interests are neuroevolution, open-endedness, neural networks, artificial life, and AI. He invented the concept of novelty search, which searches with no clearly defined objective. His key idea is that there is a tyranny of objectives prevailing in every aspect of our lives, society and indeed our algorithms. Crucially, these objectives produce convergent behaviour and thinking and distract us from discovering the stepping stones that lead to greatness. He thinks that this monotonic objective obsession, the idea that we need to keep improving benchmarks every year, is dangerous. He wrote about this in detail in his recent book "Why Greatness Cannot Be Planned", which is the main topic of discussion in the show. We also cover his ideas on open-endedness in machine learning.

00:00:00 Intro to Kenneth
00:01:16 Show structure disclaimer
00:04:16 Passionate discussion
00:06:26 Why greatness can't be planned and the tyranny of objectives
00:14:40 Chinese finger trap
00:16:28 Perverse incentives and feedback loops
00:18:17 Deception
00:23:29 Maze example
00:24:44 How can we define curiosity or interestingness?
00:26:59 Open-endedness
00:33:01 ICML 2019 and Yannic, POET, first MLST
00:36:17 Evolutionary algorithms++
00:43:18 POET, the first MLST
00:45:39 A lesson to GOFAI people
00:48:46 Machine learning -- the great stagnation
00:54:34 Actual scientific successes are usually luck, and against the odds -- BioNTech
00:56:21 Picbreeder and NEAT
01:10:47 How Tim applies these ideas to his life and why he runs MLST
01:14:58 Keith skit about UCF
01:15:13 Main show kick-off
01:18:02 Why does Kenneth value serendipitous exploration so much?
01:24:10 Scientific support for Kenneth's ideas in normal life
01:27:12 We should drop objectives to achieve them. An oxymoron?
01:33:13 Isn't this just resource allocation between exploration and exploitation?
01:39:06 Are objectives merely a matter of degree?
01:42:38 How do we allocate funds for treasure hunting in society?
01:47:34 A keen nose for what is interesting, and voting can be dangerous
01:53:00 Committees are the antithesis of innovation
01:56:21 Does Kenneth apply these ideas to his real life?
01:59:48 Divergence vs interestingness vs novelty vs complexity
02:08:13 Picbreeder
02:12:39 Isn't everything novel in some sense?
02:16:35 Imagine if there was no selection pressure?
02:18:31 Is innovation == environment exploitation?
02:20:37 Is it possible to take shortcuts if you already knew what the innovations were?
02:21:11 Go-Explore -- does the algorithm encode the stepping stones?
02:24:41 What does it mean for things to be interestingly different?
02:26:11 Behavioral characterization / diversity measure to your broad interests
02:30:54 Shaping objectives
02:32:49 Why do all ambitious objectives have deception? Picbreeder analogy
02:35:59 Exploration vs exploitation, science vs engineering
02:43:18 Schools of thought in ML, and could search lead to AGI?
02:45:49 Official ending
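As a loose illustration of the novelty-search idea mentioned above, an individual is typically scored by its average distance to its k nearest neighbours in behaviour space, so the search rewards being different rather than being good. This is a generic sketch under those assumptions, not Stanley's reference implementation; the behaviour descriptors are made up.

```python
# Generic sketch of the novelty metric from novelty search: score each
# individual by its mean distance to its k nearest neighbours in
# behaviour space. Illustrative only, not Stanley's code.
import numpy as np

def novelty(behavior, archive, k=3):
    """Mean distance from `behavior` to its k nearest neighbours in `archive`."""
    dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
    return float(np.sort(dists)[:k].mean())

# Hypothetical 2-D behaviour descriptors (e.g. final positions in a maze).
archive = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.9, 0.9]])
print(novelty(np.array([0.05, 0.05]), archive))  # low: near the crowd
print(novelty(np.array([0.5, 0.9]), archive))    # higher: unexplored region
```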
2 hr 46 min
Towards Data Science
The TDS team
66. Owain Evans - Predicting the future of AI
Most researchers agree we’ll eventually reach a point where our AI systems begin to exceed human performance at virtually every economically valuable task, including the ability to generalize from what they’ve learned to take on new tasks that they haven’t seen before. These artificial general intelligences (AGIs) would in all likelihood have transformative effects on our economies, our societies and even our species. No one knows what these effects will be, or when AGI systems will be developed that can bring them about. But that doesn’t mean these things aren’t worth predicting or estimating. The more we know about the amount of time we have to develop robust solutions to important AI ethics, safety and policy problems, the more clearly we can think about which problems should be receiving our time and attention today. That’s the thesis that motivates a lot of work on AI forecasting: the attempt to predict key milestones in AI development on the path to AGI and super-human artificial intelligence. It’s still early days for this space, but it has received attention from an increasing number of AI safety and AI capabilities researchers. One of those researchers is Owain Evans, whose work at Oxford University’s Future of Humanity Institute focuses on techniques for learning about human beliefs, preferences and values from observing human behavior or interacting with humans. Owain joined me for this episode of the podcast to talk about AI forecasting, the problem of inferring human values, and the ecosystem of research organizations that supports this type of research.
48 min
DeepMind: The podcast
8: Demis Hassabis: The interview
In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind. She digs into his former life as a chess player, games designer and neuroscientist, and explores how his love of chess helped him to get start-up funding, what drives him and his vision, and why AI keeps him up at night. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
Wired: Inside DeepMind's epic mission to solve science's trickiest problem (https://www.wired.co.uk/article/deepmind-protein-folding)
Quanta Magazine: How Artificial Intelligence Is Changing Science (https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/)
Demis Hassabis: A systems neuroscience approach to building AGI, talk at the 2010 Singularity Summit (https://www.youtube.com/watch?v=Qgd3OK5DZWI)
Demis Hassabis: The power of self-learning systems, talk at MIT 2019 (https://cbmm.mit.edu/video/power-self-learning-systems)
Demis Hassabis: Talk on creativity and AI (https://www.youtube.com/watch?v=d-bvsJWmqlc)
Financial Times: The mind in the machine: Demis Hassabis on artificial intelligence, 2017 (https://www.ft.com/content/048f418c-2487-11e7-a34a-538b4cb30025)
The Times: Interview with Demis Hassabis (https://www.thetimes.co.uk/article/demis-hassabis-interview-the-brains-behind-deepmind-on-the-future-of-artificial-intelligence-mzk0zhsp8)
The Economist Babbage podcast: DeepMind Games (https://play.acast.com/s/theeconomistbabbage/99af5224-b955-4a3c-930c-91a68bfe6c88)
Interview with Demis Hassabis (https://storage.googleapis.com/deepmind-media/podcast/Game%20Changer%20-%20Demis%20Hassabis%20Interview.pdf) from the book Game Changer (https://www.newinchess.com/game-changer), which also features an introduction from Demis.

Interviewee: DeepMind CEO and co-founder, Demis Hassabis

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
37 min