2020 Year End
34 min

Welcome to the year-end episode. Today it's all bonus questions. Oftentimes I have questions that I want to ask guests, but they don't quite fit the overall theme of the episode. So today we're going to do a whole episode of those extra questions.

I have previously recorded questions for Brian Kernighan, the creator of AWK among many other things. I have questions for Sean Allen, who works at Microsoft Research, and for a couple of other people.

Episode Page:

http://corecursive.com/060-2020-year-end

Slack Channel:

https://rebrand.ly/corec_slack

Twitter:

https://twitter.com/adamgordonbell


The .NET Core Podcast
Jamie Taylor
Picking the Right Azure Resources with Barry Luijbregts
Support for this episode comes from RJJ Software Ltd. RJJ Software is dedicated to helping you to realise your company's digital potential through innovative solutions using the latest technologies.

Remember: you can also always follow the show on Twitter @dotnetcoreshow, and the show's host on Twitter @podcasterJay.

In this episode of the .NET Core Podcast we chatted with Barry Luijbregts (aka Azure Barry) about the many different Azure resources and how to pick the "best" ones for your project.

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at https://dotnetcore.show/episode-70-picking-the-right-azure-resources-with-barry-luijbregts/

Support for this episode also comes from Datadog. Head over to datadoghq.com/dotnetcore, sign up for a 14-day trial, and claim a free t-shirt!

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend.

You can support the show by making a monthly donation on the show's Patreon page at https://www.patreon.com/TheDotNetCorePodcast

The .NET Core Podcast is a proud member of Jay and Jay Media. If you like this episode, please consider supporting our podcasting network. One $3 donation provides a week of hosting for all of our shows. You can support this show, and the others like it, at https://ko-fi.com/jayandjaymedia
1 hr 18 min
Python Bytes
Michael Kennedy and Brian Okken
#222 Autocomplete with type annotations for AWS and boto3
Sponsored by Linode! pythonbytes.fm/linode
Special guest: Greg Herrera
YouTube live stream for viewers: Watch on YouTube

Michael #1: boto type annotations
* via Michael Lerner
* boto3's services are created at runtime, so IDEs aren't able to index its code in order to provide code completion or to infer the type of these services or of the objects created by them, and type systems cannot verify them.
* Even if they could, clients and service resources are created using a service-agnostic factory method and are only identified by a string argument of that method.
* boto3_type_annotations defines stand-in classes for the clients, service resources, paginators, and waiters provided by boto3's services (a hedged sketch of the bare vs. annotated styles appears after these notes).

Brian #2: How to have your code reviewer appreciate you
* By Michael Lynch, suggested by Miłosz Bednarzak
* Actual title: "How to Make Your Code Reviewer Fall in Love with You", but 🤮. It even has the words "your reviewer will literally fall in love with you." Literally → figuratively, please.
* The topic is important, though. Here are some good tips:
* Review your own code first. "Don't just check for mistakes — imagine reading the code for the first time. What might confuse you?"
* Write a clear change list description. "A good change list description explains what the change achieves, at a high level, and why you're making this change."
* Narrowly scope changes.
* Separate functional and non-functional changes. This is tough, even for me, but important. If you need to fix something, and the formatting is a nightmare and you feel you must blacken it, do those things in two separate merge requests.
* Break up large change lists. If there's a ton to review, maybe it deserves 2-3 merges instead of 1.
* Respond graciously to critiques. It can feel like a personal attack, but hopefully it's not. Responding defensively will only make things worse.

Greg #3: REPODASH - Quality Metrics for GitHub repositories
* By Laurence Molloy
* Do you maintain a project codebase on GitHub?
* Would you like to be able to show the maturity of your project at a glance?
* Walks through the metrics available and a use case.

Michael #4: Extra, extra, extra, extra, hear all about it
* Python 3 float security bug.
* Building Python 3 from source now :-/ It's still Python 3.8.5 on Ubuntu, with the kernel patch just today! (Linux 5.4.0-66 / Ubuntu 20.04.2)
* Finally, I'm Dockering on my M1 Mac via:

    docker context create remotedocker --docker "host=ssh://user@server"
    docker context use remotedocker

  After that, docker run -it ubuntu:latest bash works as usual, but remotely!
* Why I keep complaining about the merge thing on dependabot. Why!?! ;) Anthony Shaw wrote a bot to help alleviate this a bit. More on that later.

Brian #5: testcontainers-python
* Suggested by Josh Peak
* Why mock a database? Spin up a live one in a Docker container.
* "Python port for testcontainers-java that allows using docker containers for functional and integration testing. Testcontainers-python provides capabilities to spin up docker containers (such as a database, Selenium web browser, or any other container) for testing."

    import sqlalchemy
    from testcontainers.mysql import MySqlContainer

    with MySqlContainer('mysql:5.7.32') as mysql:
        engine = sqlalchemy.create_engine(mysql.get_connection_url())
        version, = engine.execute("select version()").fetchone()
        print(version)  # 5.7.32

* The snippet above spins up a MySQL database in a container. The get_connection_url() convenience method returns a SQLAlchemy-compatible URL we use to connect to the database and retrieve the database version.

Greg #6: The Python ecosystem is relentlessly improving price-performance every day
* Python is reaching top-of-mind for more and more business decision-makers because their technology teams are delivering solutions to the business with unprecedented price-performance.
* The business impact keeps getting better and better.
* What seems like heavy adoption throughout the economy is still a relatively small inroad compared to what we'll see in the future. It's like water rapidly collecting behind a weak dam.
* It's an exciting time to be in the Python world!

Extras:

Brian:
* Firefox 86 enhances cookie protection: sites can save cookies but can't share them between sites, because Firefox maintains separate cookie storage for each site. Momentary exceptions are allowed for some non-tracking cross-site cookie uses, such as popular third-party login providers.

Joke: 56 Funny Code Comments That People Actually Wrote. These are actually in a code base somewhere (a sampling):

    /*
     * Dear Maintainer
     *
     * Once you are done trying to 'optimize' this routine,
     * and you have realized what a terrible mistake that was,
     * please increment the following counter as a warning
     * to the next guy.
     *
     * total_hours_wasted_here = 73
     */

    // sometimes I believe compiler ignores all my comments

    // drunk, fix later

    // Magic. Do not touch.

    /*** Always returns true ***/
    public boolean isAvailable() {
        return false;
    }
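As a follow-up to the boto type annotations item above: a minimal sketch of the bare vs. annotated styles, based on the boto3_type_annotations README. The bucket name and object key are placeholders for illustration, not details from the show notes.

    import boto3
    from boto3_type_annotations.s3 import Client

    # Bare boto3: the client is built at runtime by a string-keyed factory,
    # so the IDE cannot infer its type or complete its methods.
    s3_bare = boto3.client("s3")

    # Annotated boto3: the stand-in Client class gives the IDE and type
    # checker something to index, so s3's methods can be completed and
    # checked. "my-bucket" and "hello.txt" are placeholders.
    s3: Client = boto3.client("s3")
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hi")

The annotation is purely for tooling: at runtime both variables hold the same boto3 client object.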
38 min
Streaming Audio: A Confluent podcast about Apache Kafka
Confluent, original creators of Apache Kafka®
Becoming Data Driven with Apache Kafka and Stream Processing ft. Daniel Jagielski
When it comes to adopting event-driven architectures, a couple of key considerations often arise: the way that an asynchronous core interacts with external synchronous systems, and the question of "how do I refactor my monolith into services?" Daniel Jagielski, a consultant working as a tech lead/dev manager at VirtusLab for Tesco, recounts how these very themes emerged in his work with European clients.

Through observing organizations as they pivot toward becoming real time and event driven, Daniel identifies the benefits of using Apache Kafka® and stream processing for auditing, integration, pub/sub, and event streaming. He describes the differences between a provisioned cluster and a managed cluster, and why the distinction matters within the Kafka ecosystem.

Daniel also dives into the risk detection platform used by Tesco, which he helped build as a VirtusLab consultant and which marries the asynchronous and synchronous worlds. As Tesco migrated from a legacy platform to event streaming, determining risk and anomaly detection patterns has become more important than ever, and the platform needs the flexibility to adjust to changing usage patterns under COVID-19. In this episode, Daniel talks about integrations with third parties, push-based actions, and materialized views/projections for APIs.

Daniel is a tech lead/dev manager, but he's also an individual contributor on the Apollo project (an ICE organization) focused on online music usage processing. This means working with data in motion; breaking the monolith (starting with a proof of concept); ETL migration to stream processing; and ingestion via multiple processes that run in parallel with record-level processing.

EPISODE LINKS
* Building an Apache Kafka Center of Excellence Within Your Organization ft. Neil Buesing
* Risk Management in Retail with Stream Processing
* Event Sourcing, Stream Processing and Serverless
* It's Time for Streaming to Have a Maturity Model ft. Nick Dearden
* Read Daniel Jagielski's articles on the Confluent blog
* Join the Confluent Community
* Learn more with Kafka tutorials, resources, and guides at Confluent Developer
* Live demo: Kafka streaming in 10 minutes on Confluent Cloud
* Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
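To make the pub/sub pattern Daniel describes concrete, here is a minimal, hypothetical sketch using the confluent-kafka Python client. The broker address, topic name ("risk-events"), and group id are assumptions for the example, not details from the episode.

    from confluent_kafka import Producer, Consumer

    conf = {"bootstrap.servers": "localhost:9092"}

    # Publish an event asynchronously; delivery is confirmed via callback.
    producer = Producer(conf)

    def on_delivery(err, msg):
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [{msg.partition()}]")

    producer.produce("risk-events", key="order-123",
                     value=b'{"risk": "low"}', on_delivery=on_delivery)
    producer.flush()  # block until outstanding messages are delivered

    # Subscribe and poll for events from the same topic; any number of
    # consumer groups can independently read the stream.
    consumer = Consumer({**conf, "group.id": "risk-detectors",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe(["risk-events"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()

This is only the messaging layer; the stream processing and materialized views discussed in the episode would sit on top of topics like this one.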
48 min