add kubecon eu talk description and outline
moficodes committed Mar 31, 2024
1 parent f76d788 commit ae1c809
Showing 1 changed file with 13 additions and 0 deletions.
content/talks/navigating-proecessing-unit.md
@@ -0,0 +1,13 @@
---
title: "Navigating the Processing Unit Landscape in Kubernetes for AI Use Cases"
date: 2024-03-01T12:33:22-04:00
draft: false
---

## Pitch

Explain the differences between CPUs, GPUs, and TPUs for running LLM and ML workloads: why you would choose one over the other, and where each makes sense.

## Description

With the emergence of LLMs (Large Language Models) and other Machine Learning (ML) workloads on Kubernetes, gone are the days when CPUs alone are enough. Artificial Intelligence and Machine Learning workloads are best served by specialized processing units: while CPUs excel at sequential work, AI and ML require a highly parallel approach to processing information. In Kubernetes, that means GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units). This talk introduces each type of processing unit, explains what it is good at, and shows how to use it well in Kubernetes.
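As a minimal sketch of what "using them well in Kubernetes" looks like, a Pod can request accelerators through extended resources. This example assumes the NVIDIA device plugin is installed on the cluster (which exposes the standard `nvidia.com/gpu` resource name); the image name is a hypothetical placeholder:

```yaml
# Minimal sketch: a Pod requesting one NVIDIA GPU.
# Assumes the NVIDIA device plugin is running on the cluster;
# the container image is an illustrative placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference-demo
spec:
  containers:
    - name: inference
      image: my-registry/llm-inference:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1  # GPUs are requested via limits, not requests
```

TPUs follow the same extended-resource pattern under a provider-specific resource name (for example, `google.com/tpu` on GKE).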
