Data Skeptic
Jan 15, 2021
Even Cooperative Chess is Hard
23 min

Aside from victory questions like “Can black force a checkmate on white in 5 moves?”, many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?”), while more computationally challenging questions can contribute interesting results to computational complexity theory.

In this episode, Josh Brunner, a Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard.

Works Mentioned:

Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard by Josh Brunner, Erik D. Demaine, Dylan Hendrickson, and Julian Wellman

1x1 Rush Hour with Fixed Blocks is PSPACE-Complete by Josh Brunner, Lily Chung, Erik D. Demaine, Dylan Hendrickson, Adam Hesterberg, Adam Suhl, and Avi Zeff
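The distinction in the paper's title can be illustrated abstractly. A "forced mate in n moves" question alternates quantifiers (there exists a move for us such that for all opponent replies there exists a move for us...), while the cooperative (helpmate) version drops the alternation: every move is chosen helpfully, so the question collapses to bounded reachability in the game graph. The surprise in the paper is that even this cooperative variant remains computationally hard for chess. Here is a toy sketch on a made-up game graph; this is not chess and not the paper's construction, just the quantifier structure:

```python
# Tiny made-up game graph: each state maps to its successor states.
GRAPH = {
    "start": ["mid"],
    "mid": ["mate", "escape"],
    "mate": [],
    "escape": [],
}
GOAL = {"mate"}  # states that count as checkmate

def forced_win(state, depth, our_move=True):
    # Adversarial version: our moves are existential (any),
    # the opponent's replies are universal (all).
    if state in GOAL:
        return True
    if depth == 0 or not GRAPH[state]:
        return False
    if our_move:
        return any(forced_win(s, depth - 1, False) for s in GRAPH[state])
    return all(forced_win(s, depth - 1, True) for s in GRAPH[state])

def cooperative_win(state, depth):
    # Helpmate-style version: both sides cooperate, so every move is
    # existential and the question is plain bounded reachability.
    if state in GOAL:
        return True
    if depth == 0:
        return False
    return any(cooperative_win(s, depth - 1) for s in GRAPH[state])

print(forced_win("start", 2), cooperative_win("start", 2))  # False True
```

In this toy graph the opponent can sidestep to "escape", so there is no forced win, yet cooperating players reach "mate" easily. The paper's point is that for real chess positions even the cooperative question cannot be answered efficiently.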

Learning Bayesian Statistics
Alexandre ANDORRA
#34 Multilevel Regression, Post-stratification & Missing Data, with Lauren Kennedy
Episode sponsored by Tidelift: https://tidelift.com/ We already mentioned multilevel regression and post-stratification (MRP, or Mister P) on this podcast, but we didn’t dedicate a full episode to explaining how it works, why it’s useful for dealing with non-representative data, and what its limits are. Well, let’s do that now, shall we? To that end, I had the delight of talking with Lauren Kennedy! Lauren is a lecturer in Business Analytics at Monash University in Melbourne, Australia, where she develops new statistical methods to analyze social science data. Working mainly with R and Stan, Lauren studies non-representative data, multilevel modeling, post-stratification, causal inference, and, more generally, how to make inferences in the social sciences. Needless to say, I asked her everything I could about MRP, including how to choose priors, why her recent paper about structured priors can improve MRP, and when MRP is not useful. We also talked about missing-data imputation, and how all these methods relate to causal inference in the social sciences. If you want a bit of background, Lauren did her undergraduate degrees in Psychological Sciences and in Maths and Computer Sciences at Adelaide University, with Danielle Navarro and Andrew Perfors, and then did her PhD with the same advisors. She spent 3 years in NYC with Andrew Gelman’s lab at Columbia University, and then moved back to Melbourne in 2020. Most importantly, Lauren is adept at crochet: she’s already on her third blanket! Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Jon Berezowski, Robin Taylor, Thomas Wiecki, Chad Scherrer, Vincent Arel-Bundock, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll and Nathaniel Burbank. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:
Lauren's website: https://jazzystats.com/
Lauren on Twitter: https://twitter.com/jazzystats
Lauren on GitHub: https://github.com/lauken13
Improving multilevel regression and poststratification with structured priors: https://arxiv.org/abs/1908.06716
Using model-based regression and poststratification to generalize findings beyond the observed sample: https://arxiv.org/abs/1906.11323
Lauren's beginners Bayes workshop: https://github.com/lauken13/Beginners_Bayes_Workshop
MRP in RStanarm: https://github.com/lauken13/rstanarm/blob/master/vignettes/mrp.Rmd
Choosing your rstanarm prior with prior predictive checks: https://github.com/stan-dev/rstanarm/blob/vignette-prior-predictive/vignettes/prior-pred.Rmd
Mister P -- What’s its secret sauce?: https://statmodeling.stat.columbia.edu/2013/10/09/mister-p-whats-its-secret-sauce/
Bayesian Multilevel Estimation with Poststratification -- State-Level Estimates from National Polls: https://pdfs.semanticscholar.org/2008/bee9f8c2d7e41ac9c5c54489f41989a0d7ba.pdf...

Support this podcast
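The MRP workflow discussed in the episode has two stages: fit a multilevel (partially pooled) regression on the non-representative sample, then post-stratify the model's cell-level predictions by population (e.g. census) shares. A minimal Python sketch of the post-stratification stage, with made-up cell estimates and census shares standing in for a fitted Stan model:

```python
# Hypothetical per-cell predictions, as a fitted multilevel model
# would produce after partial pooling (all numbers invented).
cell_estimates = {
    ("18-34", "college"): 0.62,
    ("18-34", "no_college"): 0.55,
    ("35+", "college"): 0.48,
    ("35+", "no_college"): 0.41,
}

# Population share of each demographic cell (e.g. from a census);
# shares sum to 1.
census_weights = {
    ("18-34", "college"): 0.15,
    ("18-34", "no_college"): 0.20,
    ("35+", "college"): 0.25,
    ("35+", "no_college"): 0.40,
}

# Post-stratified estimate: weight each cell's prediction by its
# population share, correcting for the non-representative sample.
estimate = sum(cell_estimates[c] * census_weights[c] for c in cell_estimates)
print(round(estimate, 4))
```

The multilevel-regression stage matters because sparse cells get shrunk toward the group mean instead of being estimated from a handful of respondents; the post-stratification step above then reweights those stabilized cell estimates to the target population.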
1 hr 13 min
Machine Learning Street Talk
#045 Microsoft's Platform for Reinforcement Learning (Bonsai)
Microsoft has an interesting strategy with their new “autonomous systems” technology, also known as Project Bonsai. They want to create an interface to abstract away the complexity and esoterica of deep reinforcement learning. They want to fuse together expert knowledge and artificial intelligence on one platform, so that complex problems can be decomposed into simpler ones. They want to take machine learning PhDs out of the equation and make autonomous systems engineering look more like a traditional software engineering process. It is an ambitious undertaking, but interesting. Reinforcement learning is extremely difficult (as I cover in the video), and if you don’t have a team of RL PhDs with tech industry experience, you shouldn’t even consider doing it yourself. This is our take on it!

There are 3 chapters in this video:
Chapter 1: Tim's intro and take on RL being hard; intro to Bonsai and machine teaching
Chapter 2: Interview with Scott Stanfield [recorded Jan 2020] 00:56:41
Chapter 3: Traditional street talk episode [recorded Dec 2020] 01:38:13

This is *not* an official communication from Microsoft; all opinions are personal. There is no MS-confidential information in this video.

With:
Scott Stanfield https://twitter.com/seesharp
Megan Bloemsma https://twitter.com/BloemsmaMegan
Gurdeep Pall (he has not validated anything we have said in this video or been involved in its creation) https://www.linkedin.com/in/gurdeep-pall-0aa639bb/

Panel:
Dr. Keith Duggar
Dr. Tim Scarfe
Yannic Kilcher
2 hr 30 min
Towards Data Science
The TDS team
72. Margot Gerritsen - Does AI have to be understandable to be ethical?
As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications. Those implications are potentially enormous: Google’s search algorithm and Twitter’s recommendation system each have the ability to meaningfully sway public opinion on just about any issue. As a result, Google and Twitter’s choices have an outsized impact — not only on their immediate user base, but on society in general. That kind of power comes with risk of intentional misuse (for example, Twitter might choose to boost tweets that express views aligned with their preferred policies). But while intentional misuse is an important issue, equally challenging is the problem of avoiding unintentionally bad outputs from AI systems. Unintentionally bad AIs can lead to various biases that make algorithms perform better for some people than for others, or more generally to systems that are optimizing for things we actually don’t want in the long run. For example, platforms like Twitter and YouTube have played an important role in the increasing polarization of their US (and worldwide) user bases. They never intended to do this, of course, but their effect on social cohesion is arguably the result of internal cultures based on narrow metric optimization: when you optimize for short-term engagement, you often sacrifice long-term user well-being. The unintended consequences of AI systems are hard to predict, almost by definition. But their potential impact makes them very much worth thinking and talking about — which is why I sat down with Stanford professor, co-director of the Women in Data Science (WiDS) initiative, and host of the WiDS podcast Margot Gerritsen for this episode of the podcast.
1 hr 22 min
The Artists of Data Science
Harpreet Sahota
Data Science Happy Hour 21 | 26FEB2021
The Data Science Happy Hours keep getting happier! Check it out, and don't forget to register for future office hours: http://bit.ly/adsoh Register for Sunday Sessions here: http://bit.ly/comet-ml-oh If you want to interact with me multiple times a week, join Data Science Dream Job for 70% off: http://dsdj.co/artists70 Watch the episode on YouTube here: https://www.youtube.com/playlist?list=PLx-pFwty92wJoWzoO7WlfaM7iYB8qjm

[00:00:09] We kick it off with a practice presentation and then questions from the audience. This is an excellent learning experience for everyone!
[00:13:30] Audience questions start here.
[00:26:04] Tribe member Eric Sims shares some awesome news with us
[00:27:15] We learn a lot about cloud technologies through the lens of a web scraping project
[00:38:50] Can a business person manage a fully developed data science team? And what are the skills required for that?
[00:41:30] In data science, there are two types of leadership
[00:43:38] What’s the difference between strategic leadership and technical leadership?
[00:49:13] Data science leadership at the executive level vs. team lead level
[00:58:55] Question about an NLP project
[01:04:35] Product management, metrics, and KPIs
[01:12:28] What foundation does it take to break into engineering from data science, and beyond technical skills, what skills are needed to survive in engineering?
[01:23:13] How to “cold call” and network on LinkedIn

Special Guests: Greg Coquillo and Vin Vashishta.
1 hr 32 min
DeepMind: The podcast
8: Demis Hassabis: The interview
In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind. She digs into his former life as a chess player, games designer and neuroscientist, and explores how his love of chess helped him get start-up funding, what drives him and his vision, and why AI keeps him up at night. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, https://twitter.com/deepmindai, using the hashtag #DMpodcast) or email us at podcast@deepmind.com .

Further reading:
Wired: Inside DeepMind's epic mission to solve science's trickiest problem (https://www.wired.co.uk/article/deepmind-protein-folding)
Quanta Magazine: How Artificial Intelligence Is Changing Science (https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/)
Demis Hassabis: A systems neuroscience approach to building AGI. Talk at the 2010 Singularity Summit (https://www.youtube.com/watch?v=Qgd3OK5DZWI)
Demis Hassabis: The power of self-learning systems. Talk at MIT 2019 (https://cbmm.mit.edu/video/power-self-learning-systems)
Demis Hassabis: Talk on Creativity and AI (https://www.youtube.com/watch?v=d-bvsJWmqlc)
Financial Times: The mind in the machine: Demis Hassabis on artificial intelligence (2017) (https://www.ft.com/content/048f418c-2487-11e7-a34a-538b4cb30025)
The Times: Interview with Demis Hassabis (https://www.thetimes.co.uk/article/demis-hassabis-interview-the-brains-behind-deepmind-on-the-future-of-artificial-intelligence-mzk0zhsp8)
The Economist Babbage podcast: DeepMind Games (https://play.acast.com/s/theeconomistbabbage/99af5224-b955-4a3c-930c-91a68bfe6c88?autoplay=true)
Interview with Demis Hassabis (https://storage.googleapis.com/deepmind-media/podcast/Game%20Changer%20-%20Demis%20Hassabis%20Interview.pdf) from the book Game Changer (https://www.newinchess.com/game-changer), which also features an introduction from Demis

Interviewee: DeepMind CEO and co-founder, Demis Hassabis

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
37 min