Oct 11, 2023
Operating ML and GenAI at scale: the latest on Kubernetes, Karpenter and Bedrock
In Episode 8, we're joined by AWS Partner Solutions Architects Andrew Park and Mike McDonald to discuss the complexities and costs of running today's ML and AI workloads in the cloud.
From anecdotes of the bad old days before container orchestration, our panelists take you to the present challenge of simplifying efficient infrastructure operations, with the aim of freeing up Data Scientists and Engineers to focus on building and innovating.
Our panelists discuss the merits, pitfalls, and potential of various cost-optimization tools and approaches (Ray, Karpenter, Spot Instances, time-slicing), all key to meeting the expensive compute demands that ML and AI models generate at scale.
Watch the full episode for:
* The lowdown on Amazon Bedrock and where it fits into the current stack of AWS ML and AI offerings: how it works, its use cases, and the access it grants to new generative AI models (see the quick sketch after this list)
* How Karpenter can make your life easier and save you SO much money (especially if you set it and forget it with EKS)
* And hot takes on the controversial question: is ECS dead?!
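To give a taste of what Bedrock access looks like in practice, here's a minimal sketch of calling a foundation model through the Bedrock runtime API with boto3. Model IDs and request bodies vary by provider; this example assumes the Anthropic Claude v2 text-completion format and that your account has been granted access to that model in the region you call.

```python
# Minimal sketch: invoking a foundation model via the Amazon Bedrock runtime API.
# Assumes a boto3 version with Bedrock support, model access granted in your
# account/region, and the Anthropic Claude v2 request format (other providers
# on Bedrock expect different request bodies).
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude v2 text-completion request body.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize why Karpenter helps control GPU costs.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a streaming payload; read and decode it to get the completion text.
print(json.loads(response["body"].read())["completion"])
```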