
workspace chart won't mount JWT secret #246

Merged 1 commit into main on Nov 27, 2024

Conversation

@ImpSy (Collaborator) commented on Nov 21, 2024

Jira ticket

https://spotinst.atlassian.net/browse/BGD-6183

Description

Do not mount the JWT token as the JUPYTER_GATEWAY_AUTH_TOKEN environment variable sourced from the secret created by the bigdata-proxy.
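
For reference, a minimal sketch of how the change can be checked against the rendered chart; the release name and chart path below are hypothetical and not taken from this repository:

```sh
# Render the workspace chart locally and verify that JUPYTER_GATEWAY_AUTH_TOKEN
# is no longer injected from the bigdata-proxy secret (the grep should find nothing).
helm template wksp-test ./charts/bigdata-notebook-workspace \
  | grep JUPYTER_GATEWAY_AUTH_TOKEN \
  && echo "token env var is still rendered" \
  || echo "token env var is not rendered (expected)"
```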

Demo

see Test plan and results

Checklist

  • I have added a Jira ticket link
  • I have filled in the test plan
  • I have executed the tests and filled in the test results
  • I have updated/created relevant documentation

How to test

Run the chart on demo-0 and start a notebook.
Because the secret is not mounted, the notebook relies on the browser JWT token.
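
As an illustrative spot-check of the running pod (pod name and namespace are taken from the describe output below; this command is not part of the PR itself):

```sh
# Expect no output and a non-zero exit code: the variable should be absent
# from the workspace container's environment.
kubectl -n spot-system exec wksp-62236-ddc1997e-5d96fd46dc-rd7rp -c workspace \
  -- printenv JUPYTER_GATEWAY_AUTH_TOKEN
```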

Test plan and results

kubectl describe output for the workspace pod. Note that JUPYTER_GATEWAY_AUTH_TOKEN is absent from the Environment section and that no token secret appears under Mounts or Volumes:
Name:             wksp-62236-ddc1997e-5d96fd46dc-rd7rp
Namespace:        spot-system
Priority:         0
Service Account:  default
Node:             ip-192-168-121-103.us-west-2.compute.internal/192.168.121.103
Start Time:       Fri, 22 Nov 2024 09:22:22 +0100
Labels:           app.kubernetes.io/instance=wksp-62236-ddc1997e
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=bigdata-notebook-workspace
                  app.kubernetes.io/version=4.1.8-ofas-31799c9
                  bigdata.spot.io/component=bigdata-notebook-workspace
                  bigdata.spot.io/creatorUserId=62236
                  helm.sh/chart=bigdata-notebook-workspace-0.0.17
                  pod-template-hash=5d96fd46dc
Annotations:      bigdata.spot.io/notebookWorkspaceName: seb
Status:           Running
IP:               192.168.110.96
IPs:
  IP:           192.168.110.96
Controlled By:  ReplicaSet/wksp-62236-ddc1997e-5d96fd46dc
Containers:
  workspace:
    Container ID:  containerd://5472dd584e1cd81f387ba05bc7f8e9fcb2c357958b945102b69fec5d537140ac
    Image:         public.ecr.aws/ocean-spark/bigdata-notebook:lab-4.1.8-ofas-31799c9
    Image ID:      public.ecr.aws/ocean-spark/bigdata-notebook@sha256:46baad2ee11eba13404ca5e37247acf836a5ab15257db42e641d76b0d07317e9
    Port:          8888/TCP
    Host Port:     0/TCP
    Command:
      start-notebook.py
    Args:
      --ServerApp.allow_origin=*
      --ServerApp.base_url=/api/ocean/spark/cluster/osc-fbf5c7dd/workspace/wksp-62236-ddc1997e/proxy
      --ServerApp.ip=0.0.0.0
      --ServerApp.token=
      --GatewayWebSocketConnection.kernel_ws_protocol=''
      --GatewayClient.gateway_token_renewer_class=jupyter_server.gateway.spottokenrenewer.SpotTokenRenewer
    State:          Running
      Started:      Fri, 22 Nov 2024 09:22:45 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1000Mi
    Requests:
      cpu:     250m
      memory:  500Mi
    Environment:
      CHOWN_HOME:                       yes
      CHOWN_HOME_OPTS:                  -R
      JUPYTER_GATEWAY_URL:              http://bigdata-notebook-service.spot-system.svc.cluster.local
      JUPYTER_GATEWAY_REQUEST_TIMEOUT:  600
      JUPYTER_GATEWAY_HEADERS:          {"Content-Type": "application/json"}
    Mounts:
      /home/jovyan from workspace-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n2r52 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  workspace-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  wksp-62236-ddc1997e
    ReadOnly:   false
  kube-api-access-n2r52:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 bigdata.spot.io/unschedulable=ocean-spark-system:NoSchedule
                             kubernetes.azure.com/scalesetpriority=spot:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        10m                   default-scheduler        0/8 nodes are available: 1 Insufficient memory, 2 Too many pods, 2 node(s) had untolerated taint {bigdata.spot.io/unschedulable: ocean-spark}, 3 Insufficient cpu. preemption: 0/8 nodes are available: 2 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
  Warning  FailedScheduling        7m8s (x2 over 8m40s)  default-scheduler        0/8 nodes are available: 1 Insufficient memory, 2 Too many pods, 2 node(s) had untolerated taint {bigdata.spot.io/unschedulable: ocean-spark}, 3 Insufficient cpu. preemption: 0/8 nodes are available: 2 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
  Warning  FailedScheduling        7m                    default-scheduler        0/9 nodes are available: 1 Insufficient memory, 1 node(s) had volume node affinity conflict, 2 Too many pods, 2 node(s) had untolerated taint {bigdata.spot.io/unschedulable: ocean-spark}, 3 Insufficient cpu. preemption: 0/9 nodes are available: 3 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
  Warning  FailedScheduling        6m52s                 default-scheduler        0/9 nodes are available: 1 Insufficient memory, 1 node(s) had volume node affinity conflict, 2 Too many pods, 2 node(s) had untolerated taint {bigdata.spot.io/unschedulable: ocean-spark}, 3 Insufficient cpu. preemption: 0/9 nodes are available: 3 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
  Normal   Scheduled               6m41s                 default-scheduler        Successfully assigned spot-system/wksp-62236-ddc1997e-5d96fd46dc-rd7rp to ip-192-168-121-103.us-west-2.compute.internal
  Normal   SuccessfulAttachVolume  6m40s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-4d326f3f-f696-4952-a26a-16b6dfe8480b"
  Normal   Pulling                 6m37s                 kubelet                  Pulling image "public.ecr.aws/ocean-spark/bigdata-notebook:lab-4.1.8-ofas-31799c9"
  Normal   Pulled                  6m19s                 kubelet                  Successfully pulled image "public.ecr.aws/ocean-spark/bigdata-notebook:lab-4.1.8-ofas-31799c9" in 18.721s (18.721s including waiting). Image size: 456707704 bytes.
  Normal   Created                 6m18s                 kubelet                  Created container workspace
  Normal   Started                 6m18s                 kubelet                  Started container workspace

https://console.spotinst.com/ocean/spark/apps/clusters/osc-fbf5c7dd/apps/nb-84afca71-2d7a-4120-9979-4597fc7702de-a9cbf/overview/cpu

@ImpSy ImpSy requested a review from a team as a code owner November 21, 2024 19:52
@ImpSy ImpSy force-pushed the BGD-6183_dont_mount_jwt_secret branch from 8173a63 to ba95156 on November 27, 2024 13:51
@ImpSy ImpSy merged commit 9746b00 into main Nov 27, 2024
3 checks passed
@ImpSy ImpSy deleted the BGD-6183_dont_mount_jwt_secret branch November 27, 2024 14:04