Anyone doing carbon aware scheduling with Kubernetes? #36
Replies: 4 comments 7 replies
-
We're planning an operator for time shifting. I think the KEDA operator is more suitable for demand shaping (doing less or doing more).
-
Hi @AydinMirMohammadi, that's great to hear you're also looking at this area. Good point: the carbon-aware-keda-operator is doing demand shaping by setting the max replicas depending on the carbon intensity, since the number of replicas is still controlled by the KEDA scaler being used. For a time-shifting operator, have you seen the exporter the KEDA operator uses? It fetches the carbon intensity forecast using the carbon-aware-sdk from the GSF and saves it to a ConfigMap that can be read by other components. For location shifting you're right that the Karmada solution I've been working on only considers scope 2 emissions from the electricity grid. Scope 3 emissions, especially the embodied carbon in the servers, should also be considered, but it's hard to get data for this. I hope the real-time carbon metrics project will help with that: https://github.com/orgs/Green-Software-Foundation/discussions/34
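As a rough sketch of how a time-shifting component might consume a forecast like the one the exporter publishes (the field names and values below are illustrative assumptions, not the exporter's actual ConfigMap schema):

```python
from datetime import datetime

# Hypothetical forecast entries, as a time-shifting controller might parse
# them after reading the ConfigMap (the real schema may differ).
forecast = [
    {"time": datetime(2023, 9, 1, 0), "intensity": 420.0},  # gCO2eq/kWh
    {"time": datetime(2023, 9, 1, 1), "intensity": 180.0},
    {"time": datetime(2023, 9, 1, 2), "intensity": 250.0},
]

def best_start(forecast):
    """Pick the start time of the forecast slot with the lowest carbon intensity."""
    return min(forecast, key=lambda slot: slot["intensity"])["time"]

print(best_start(forecast))  # -> 2023-09-01 01:00:00
```

A controller could then hold deferrable jobs (e.g. via a suspend flag) until that window opens.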
-
Hey, sorry I am late to the party 😅 I just wanted to add that I am starting my Master's thesis project this Monday (28 Aug '23), running to the end of the year, with the aim of creating a version of Karpenter that can perform carbon-aware autoscaling of Kubernetes clusters. I don't have much to show yet, and the first weeks will probably just be spent gathering knowledge, but I am very happy to see that you folks also find this topic interesting. I am excited to get started, and the work by you and the other people behind the Green Software Foundation has already been, and continues to be, tremendously helpful. 💚
-
An update on the carbon-aware-karmada-operator: I demoed the prototype I developed at the Karmada community meeting a couple of weeks ago: https://www.youtube.com/watch?v=-Y0SDbSKHJk I got some great feedback, including that it would be useful to support weights when scheduling across clusters; this issue was created as a result: karmada-io/karmada#3917. I think a good next step would be to see if there are members of the Karmada community and/or the community here who would like to collaborate on this. Please do chime in if you're interested 🙏 and I'll keep a 🧵 here updated.
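To make the weighting idea concrete, here is a minimal sketch of one way weights could be derived: spread replicas across clusters in inverse proportion to each cluster's carbon intensity. The cluster names and intensity figures are illustrative, and this is not the scheme proposed in the Karmada issue, just one possible starting point:

```python
def carbon_weights(intensities):
    """Weight each cluster inversely to its grid carbon intensity,
    normalised so the weights sum to 1."""
    inverse = {cluster: 1.0 / ci for cluster, ci in intensities.items()}
    total = sum(inverse.values())
    return {cluster: w / total for cluster, w in inverse.items()}

# Illustrative gCO2eq/kWh values per member cluster.
weights = carbon_weights({"eu-west": 200.0, "us-east": 400.0})
print(weights)  # eu-west gets twice the weight of us-east
```

These fractions could then feed into whatever weighted replica-splitting mechanism the scheduler already supports.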
-
Hi,
I'm interested in making software carbon aware. There are many ways of doing this, but Kubernetes is widely used, provides APIs for scheduling, and can be extended using CRDs (Custom Resource Definitions) and custom controllers.
For temporal shifting (moving workloads to times when carbon intensity is lower) there is the carbon-aware-keda-operator developed by Microsoft, which builds on top of the KEDA (Kubernetes Event-Driven Autoscaling) project. KEDA can do autoscaling based on many types of metrics, such as SQS or Kafka queue length.
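The core of that demand-shaping approach can be sketched in a few lines: map the current carbon intensity to a replica ceiling, with cleaner grid conditions allowing more replicas. The thresholds and values here are illustrative, not the operator's actual configuration format:

```python
def max_replicas_for(intensity, rules):
    """Return the replica ceiling for the first matching intensity band.
    `rules` is a list of (upper_bound_gco2eq_kwh, max_replicas) pairs
    sorted by ascending upper bound."""
    for upper_bound, max_replicas in rules:
        if intensity <= upper_bound:
            return max_replicas
    return 1  # highest-carbon fallback: scale down to a minimum

# Illustrative bands: cleaner grid -> higher replica ceiling.
rules = [(150, 10), (300, 5), (500, 2)]
print(max_replicas_for(120, rules))  # -> 10
print(max_replicas_for(350, rules))  # -> 2
```

The actual replica count within that ceiling stays with the underlying KEDA scaler, which is what makes this demand shaping rather than direct scheduling.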
I've been looking at spatial shifting (moving workloads to physical locations where carbon intensity is lower). I've created a proof-of-concept carbon-aware-karmada-operator that extends the Karmada project, which can do multi-cluster and multi-cloud scheduling.
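In its simplest form, spatial shifting reduces to picking the member cluster with the lowest current grid intensity. A minimal sketch, with made-up cluster names and readings (the real operator's selection logic and data source may differ):

```python
def pick_cluster(cluster_intensities):
    """Select the member cluster whose grid currently has the lowest
    carbon intensity (scope 2 emissions only)."""
    return min(cluster_intensities, key=cluster_intensities.get)

# Illustrative real-time gCO2eq/kWh readings per member cluster.
readings = {"gke-europe": 210.0, "aks-us": 390.0, "eks-ap": 480.0}
print(pick_cluster(readings))  # -> gke-europe
```

A real implementation would also need to weigh data-gravity, latency, and the embodied (scope 3) carbon of the hardware, which is much harder to get data for.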
Are there other projects out there doing this? Are you already doing carbon aware scheduling in your clusters?
I'm also interested to hear what this community thinks of the current efforts, as well as suggestions on how they can be improved and how we can increase adoption. Thanks!