The Cloudcast
Nov 25, 2020
Great Data Models Need Great Features
35 min

Mike Del Balso (@mikedelbalso, CEO at @TectonAI) talks about lessons learned from Uber’s Michelangelo ML platform, enabling DevOps for ML data, and how Tecton enables features for data models.  

SHOW: 477

SHOW SPONSOR LINKS:

CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw

PodCTL Podcast is Back (Enterprise Kubernetes) - http://podctl.com

SHOW NOTES:

Topic 1 - Welcome to the show. It’s always exciting to talk to new companies. You were doing some pretty interesting things at Uber prior to starting Tecton, so tell us a little bit about that experience and then what motivated you to start Tecton? 

Topic 2 - There are lots of Data/AI/ML tools and platforms out there. Tecton talks about “great models need great features”. Give us a high-level overview of the Tecton platform and the perspective you bring to solving complex business problems.

Topic 3 - After reading the papers on the Uber Michelangelo platform, it’s clear that today’s interactions aren’t a bunch of individual “decisions”, but layers of decisions made on ever-changing data (the UberEATS example). Why does business need a new approach to how they interact with data? 

Topic 4 - When I think about earlier approaches for companies to “harness data for analytics”, there was always the problem of data silos. Do you find that companies need to organize themselves differently, not just organize their data, to overcome those silo challenges? Does it take a much more product-centric approach vs. the traditional “analyst” approach?

Topic 5 - Every new company and platform needs to find product-market fit. What do you see as early “fits” for the Tecton platform? 

Topic 6 - How much data-science expertise does a company need today to be able to leverage Tecton, and how much does the platform lower the barrier to entry? 

FEEDBACK?

Kubernetes Podcast from Google
Adam Glick and Craig Box
CNCF and the Linux Foundation, with Chris Aniszczyk
After building the Eclipse IDE and Twitter’s Open Source office, Chris Aniszczyk bootstrapped the CNCF, joining its parent the Linux Foundation in 2015. He’s now a VP of DevRel there, as well as CTO at the CNCF and Executive Director of the Open Container Initiative. Chris joins us to share his technology journey and Cloud Native predictions for 2021.

And all that is now
And all that is gone
And all that’s to come
And everything under the sun is in tune
But the sun is eclipsed by the moon

Do you have something cool to share? Some questions? Let us know:
* web: kubernetespodcast.com
* mail: kubernetespodcast@google.com
* twitter: @kubernetespod

Chatter of the week
* Adam on LinkedIn

News of the week
* Otomi from RedKubes
* Nutanix now supports Anthos
* Tanzu Advanced is GA
* Pivotal Labs is Tanzu Labs
* VMware needs a new CEO
* New CSI driver for Google Kubernetes Engine
* Slim.ai announces seed funding
* Grafana Cloud introduces free tier
* Sysdig container security usage report (PDF)
* 63-node Kubernetes cluster using Firecracker by Álvaro Hernández
* The definitive guide to Vertical Pod Autoscaling by Povilas Versockas

Links from the interview
* ZX Spectrum
* R-Type and Jet Pac
* GORILLA.BAS
* Gentoo Linux
* Java Virtual Machine (JVM)
* Eclipse
* Object Technology International
* Erich Gamma
* code9, Chris’s startup
* Backstage and Roadie
* Twitter OSS
* Pants
* Mesos
* twemproxy
* Linux Foundation, and its sub-projects CNCF and OCI
* Services for projects
* Linus Torvalds and Greg Kroah-Hartman
* Chris’s Cloud Native predictions for 2021
* Developer experience: Gitpod, GitHub Codespaces or Google Cloud Shell
* Wasm in Envoy
* WASI, the WebAssembly System Interface
* Chris Aniszczyk on Twitter and on the web
* Canada Revenue Agency on Twitter
39 min
AWS TechChat
Shane Baldacchino
Episode 79 - re:Invent 2020 - App Dev, Containers & Database Wrap
In this episode of AWS TechChat we continue with part two of our four-part re:Invent 2020 series, covering all the Application Development, Containers, and Database announcements.

For our developer community, we talked about:
* Using CodeGuru’s new security detectors to help you find and remediate security issues in your code
* Python support for CodeGuru, now in preview
* We shared another new service, DevOps Guru (in preview), for measuring and improving an application’s operational performance
* Lambda now supports up to 10 GB of memory and 6 vCPU cores, with billing granularity reduced to 1 ms (see the sketch after these notes)
* Amazon API Gateway now supports integration with Step Functions StartSyncExecution for HTTP APIs
* AppFlow simplifies cloud app integrations for Amazon Connect customers with Customer Profiles
* Similarly, AppFlow can provide the same third-party app integrations to Honeycode
* For Amplify users, deploy Fargate containers through the Amplify CLI, and you get a new Admin UI to boot that deploys all the underlying bits for you
* AWS Proton to bridge the gap between platform and development teams

In containers we kicked it off with EKS:
* First, cluster add-ons managed through the EKS console, CLI, or API
* Run EKS on premises with EKS Distro
* EKS on Fargate now has built-in logging with Fluent Bit under the hood
* You can now see all your Kubernetes resources in the EKS console without needing extra tools
* Public registries for your container images with ECR Public and the ECR Public Gallery
* Use your existing container images as a Lambda packaging format
* ECS deployment circuit breaker, in preview, to stop failing deployments from getting worse and automatically roll back

In database land we covered:
* Babelfish, not the mythological creature, but a translation layer between Aurora PostgreSQL and Microsoft SQL Server
* v2 of Aurora Serverless has arrived; it is considerably faster and scales in a fraction of a second, fast enough for event-driven applications
* Data Exchange adds revision access rules for governing access
* RDS Service Delivery Partners for when you want someone to build, deploy, and manage your RDS deployments
* Cross-Region automated backups come to RDS for Oracle
* Share data across Redshift clusters with data sharing (in preview) and pull data from partners directly via the Redshift console
* Redshift federated query comes to RDS for MySQL and Aurora MySQL
* Redshift Automatic Table Optimization to keep your data warehouse running in tip-top shape automatically
* Move Redshift clusters easily across Availability Zones
* JSON support in preview for Redshift
* Finally, AQUA comes to Redshift in preview as a caching layer to speed up queries

Stay tuned as we cover all aspects of re:Invent 2020 in our coming multi-part re:Invent update.
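As a quick illustration of the Lambda announcement above, here is a minimal sketch (not from the episode) that uses Python and boto3 to raise an existing function to the new 10 GB memory ceiling; the function name is a placeholder, and the 1 ms billing granularity applies automatically with no configuration change.

```python
# Minimal sketch, assuming boto3 and AWS credentials are already configured.
# The function name below is hypothetical; 1 ms billing needs no setting at all.
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.update_function_configuration(
    FunctionName="my-heavy-function",  # placeholder function name
    MemorySize=10240,                  # 10 GB, the new maximum
    Timeout=900,                       # optional: the 15-minute maximum timeout
)

# Confirm the new memory setting and the update status returned by the API.
print(response["MemorySize"], response["LastUpdateStatus"])
```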
52 min
Streaming Audio: A Confluent podcast about Apache Kafka
Confluent, original creators of Apache Kafka®
Scaling Developer Productivity with Apache Kafka ft. Mohinish Shaikh
Confluent Cloud and Confluent Platform run efficiently largely because of the dedication of the Developer Productivity (DevProd) team, formerly known as the Tools team. Mohinish Shaikh (Software Engineer, Confluent) talks to Tim Berglund about how his team builds the software tooling and automation for the entire event streaming platform and ensures seamless delivery of several engineering processes across engineering and the rest of the org.

With the right tools and the right data, a developer productivity team can understand the overall effectiveness of a development team and its ability to produce results. The DevProd team helps engineering teams at Confluent ship code from commit all the way to end customers actively using Apache Kafka®. The team understands a wide scope of polyglot applications as well as the complexities of a diverse technology stack, which it works with on a regular basis to help solve business-critical problems for the engineering org.

The team actively measures how the systems interact with one another and which programs are needed to properly run the code in various environments, helping release reliable artifacts for Confluent Cloud and Confluent Platform. An in-depth understanding of the entire framework and development workflow is essential for organizations to deliver software reliably, on time, and within their cost budget. The DevProd team provides that second line of defense and reliability before the code is released to end customers. As the need for compliance increases and the event streaming platform continues to evolve, the DevProd team is in place to make sure that all of the final touches are completed.

EPISODE LINKS
* Leveraging Microservices and Apache Kafka to Scale Developer Productivity
* Join the Confluent Community Slack
* Learn more with Kafka tutorials, resources, and guides at Confluent Developer
* Live demo: Kafka streaming in 10 minutes on Confluent Cloud (sketched below)
* Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
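For readers following the “Kafka streaming in 10 minutes” link above, a minimal producer sketch in Python using the confluent-kafka client might look like the following; it is not taken from the episode, and the bootstrap server, API key/secret, and topic name are all placeholders for your own Confluent Cloud values.

```python
# Minimal sketch, assuming a Confluent Cloud cluster and the confluent-kafka
# Python client (pip install confluent-kafka). All credentials and the topic
# name below are placeholders, not values from the episode.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",      # placeholder API key
    "sasl.password": "<API_SECRET>",   # placeholder API secret
}

producer = Producer(conf)

def delivery_report(err, msg):
    # Called once per message to confirm delivery or report an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}]")

producer.produce("demo-topic", key="user-1", value="hello, kafka", callback=delivery_report)
producer.flush()  # block until all queued messages are delivered
```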
34 min