
You can use taints, tolerations, and a node selector to make sure that only pods with the on-demand lifecycle label are scheduled on on-demand nodes. Pods without this label can't be scheduled on these nodes. Taints and tolerations work together to make sure pods are scheduled on the right nodes.

1. Make sure your [pod has the tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) set to:

<pre><code>tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"</code></pre>

> **Note**: If the <b>operator</b> is <i>Exists</i>, don't specify a value on the toleration; the value in the launch specification needs to be <i>null</i>.
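
   For example, a toleration that uses the <i>Exists</i> operator omits the value entirely and matches any taint with the same key and effect:

<pre><code>tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"</code></pre>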
2. Configure a [node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) with the on-demand lifecycle label (<code>spotinst.io/node-lifecycle: od</code>), as shown in the sample deployment below.

<details>
<summary markdown="span">Sample deployment with node selector set to <i>od</i></summary>

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "value"
        effect: "NoSchedule"
      nodeSelector:
        spotinst.io/node-lifecycle: od
</code></pre>

</details>


<font color="#FC01CC">On the pod side -
3. In the Spot console, [configure Ocean custom launch specificatoins](ocean/tutorials/migrate-existing-egs-ekskops?id=step-2-configure-ocean-custom-launch-specifications).

You should configure the tolerations:
If there are several launch specifications configured in the cluster, you should add a custom label to the specific launch specification, as well as to the pod. The reason another custom label should be added is that only tolerations that configured on the pod, will not trigger a scale-up from the dedicated launch specification.

For example:

<pre><code>
tolerations:
- key: "key"
operator: "Equal"
value: "value"
effect: "NoSchedule"
</code></pre>

Important note! If the operator is Exists - no value should be specified (in that case the value in the LS should be ‘null’).

You should configure a node selector with the od lifecycle label.

Here is an example of a deployment:

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
tolerations:
- key: "key"
operator: "Equal"
value: "value"
effect: "NoSchedule"
nodeSelector:
spotinst.io/node-lifecycle: od
</code></pre>

</font>

3. In the Spot console, configure
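
   For example, assuming a hypothetical custom label <code>workload-type: on-demand-app</code> is defined as a node label on the dedicated launch specification, the pod's node selector would include it alongside the lifecycle label:

<pre><code>nodeSelector:
  spotinst.io/node-lifecycle: od
  workload-type: on-demand-app  # hypothetical custom label, must also be set on the launch specification
</code></pre>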

<font color="#FC01CC">original

On the node side (Ocean configuration) -

You should configure a LS with a “matching” taints (a toleration "matches" a taint if the keys and the effects are the same).

For example:
####no example in the article.....####


Important note:

If there are several LS configured in the cluster, you should add a custom label to the specific LS, as well as to the pod. The reason another custom label should be added is that only tolerations that configured on the pod, will not trigger a scale-up from the dedicated LS.

Please note, in case a customer wishes to run only a specific workload on the nodes launched from the LS, simply adjust the node selector to the dedicated node selector of the workload.

For example, in case the customer uses LS for GPU instance and he wishes only pods with a dedicated node selector would run on the node, he should follow the same steps and simply adjust the node selector to the dedicated one.

</font>
If you want to run only a specific workload on the nodes launched from the launch specification, adjust the node selector to the dedicated node selector of the workload. For example, if you use a launch specification for GPU instances and only want pods with a dedicated node selector to run on those nodes, follow the same steps and adjust the node selector to the dedicated one.
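
As a sketch, assuming a hypothetical <code>workload-type: gpu</code> label on the GPU launch specification, the pod template's spec would use that dedicated label instead of the lifecycle label:

<pre><code>spec:
  containers:
  - name: gpu-app
    image: my-gpu-app:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1  # requires the NVIDIA device plugin on the node
  nodeSelector:
    workload-type: gpu  # hypothetical dedicated label on the GPU launch specification
</code></pre>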
</div>

</details>
