
kubeadm and external etcd init error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition #2612

Closed
vupv opened this issue Nov 25, 2021 · 42 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@vupv commented Nov 25, 2021

Error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
init.log

root@master01:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

root@master01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:47:19Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

root@master01:# etcd version
{"level":"info","ts":1637853104.719803,"caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","version"]}
{"level":"warn","ts":1637853104.719844,"caller":"etcdmain/etcd.go:74","msg":"failed to verify flags","error":"'version' is not a valid flag"}
root@master01:~#

I plan to set up the cluster with the boxes below:
master01 --> 192.168.1.85
master02 --> 192.168.1.86
haproxy01, keepalived --> 192.168.1.87 , VIP --> 192.168.1.88
worker01 --> 192.168.1.90
worker02 --> 192.168.1.91

Install docker, kubelet, kubeadm, and kubectl on all cluster nodes.

HAProxy setup
root@haproxy01:~# cat /etc/haproxy/haproxy.cfg
global
...
defaults
...
frontend k8s_frontend
bind 192.168.1.88:6443
option tcplog
mode tcp
default_backend k8s_backend

backend k8s_backend
mode tcp
balance roundrobin
option tcp-check
server master01 192.168.1.85:6443 check fall 3 rise 2
server master02 192.168.1.86:6443 check fall 3 rise 2
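A quick way to confirm haproxy is actually listening on the frontend (a sanity check, assuming ss from iproute2 is available on the box):

root@haproxy01:~# ss -tlnp | grep 6443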

Keepalived setup

root@haproxy01:~# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy { # Requires keepalived-1.1.13
script "killall -0 haproxy" # cheaper than pidof
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface enp0s3
state MASTER
virtual_router_id 51
priority 100 # 101 on master, 100 on backup
virtual_ipaddress {
192.168.1.88 brd 192.168.1.255 dev enp0s3 label enp0s3:1
}
track_script {
chk_haproxy
}
}
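To verify keepalived actually brought the VIP up on the interface named in the config, a quick check:

root@haproxy01:~# ip addr show dev enp0s3 | grep 192.168.1.88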

Generating the TLS certificates

$ vim ca-config.json
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}

$ vim ca-csr.json
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "IE",
"L": "Cork",
"O": "Kubernetes",
"OU": "CA",
"ST": "Cork Co."
}
]
}

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ vim kubernetes-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "IE",
"L": "Cork",
"O": "Kubernetes",
"OU": "Kubernetes",
"ST": "Cork Co."
}
]
}

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=192.168.1.85,192.168.1.86,192.168.1.87,192.168.1.88,192.168.1.89,127.0.0.1,kubernetes.default \
  -profile=kubernetes kubernetes-csr.json | \
  cfssljson -bare kubernetes

scp ca.pem kubernetes.pem kubernetes-key.pem root@192.168.1.85:/etc/etcd/
scp ca.pem kubernetes.pem kubernetes-key.pem root@192.168.1.86:/etc/etcd/
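Before starting etcd it can help to double-check that the generated server cert really carries all the expected SANs (a sketch using openssl, which ships with Ubuntu):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'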

etcd setup on the two masters
root@master01:~# cat /etc/systemd/system/etcd.service

[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.1.85 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.1.85:2380 \
  --listen-peer-urls https://192.168.1.85:2380 \
  --listen-client-urls https://192.168.1.85:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.85:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster 192.168.1.85=https://192.168.1.85:2380,192.168.1.86=https://192.168.1.86:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

root@master02:~# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 192.168.1.86 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.1.86:2380 \
  --listen-peer-urls https://192.168.1.86:2380 \
  --listen-client-urls https://192.168.1.86:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.86:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 192.168.1.85=https://192.168.1.85:2380,192.168.1.86=https://192.168.1.86:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

root@master01:~# ETCDCTL_API=3 etcdctl member list
8a137d8e3cb900a, started, 192.168.1.86, https://192.168.1.86:2380, https://192.168.1.86:2379, false
75b9a29ae5a417ae, started, 192.168.1.85, https://192.168.1.85:2380, https://192.168.1.85:2379, false
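A health check over the TLS client endpoints (the ones the apiserver will use) is a useful extra verification; a sketch reusing the cert paths from above:

root@master01:~# ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.85:2379,https://192.168.1.86:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health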

Prepare cluster config file
root@master01:~# cat cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
    endpoints:
    - https://192.168.1.85:2379
    - https://192.168.1.86:2379
networking:
  dnsDomain: cluster.local
  podSubnet: 10.30.0.0/24
  serviceSubnet: 10.96.0.0/12
kubernetesVersion: v1.22.4
controlPlaneEndpoint: 192.168.1.88:6443
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    authorization-mode: "RBAC"
  certSANs:
  - "127.0.0.1"
  - "192.168.1.85"
  - "192.168.1.86"
  - "192.168.1.88"
  - ".fe.me"
controllerManager: {}
scheduler: {}
certificatesDir: /etc/kubernetes/pki
imageRepository: k8s.gcr.io
clusterName: kubernetes
dns: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
root@master01:~#

root@master01:~# kubeadm init --config=cluster.yaml

@neolit123 (Member)

I saw your comments earlier on another issue, but I do not think you will get a better response on this new issue. kubeadm, as a client, tries to access the api server through the LB but it fails. It could be a temporary outage or a permanent problem.

To test if the LB works you can deploy a test Go-compiled app that serves as a dummy api server and see if curl can reach it through the LB.

This is not a kubeadm problem per se, and the best place to discuss it is the support channels.
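As a rough stand-in for the dummy-server idea above (assuming socat is installed and nothing else is bound to the backend port on master01), something like this shows whether the haproxy TCP path forwards at all:

# on master01, with port 6443 free:
socat TCP-LISTEN:6443,reuseaddr,fork SYSTEM:'echo dummy-backend' &
# from another machine, through the VIP; haproxy is in tcp mode so the raw TCP reply comes straight back:
echo test | nc -w 2 192.168.1.88 6443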

/kind support

@k8s-ci-robot added the kind/support label on Nov 25, 2021
@vupv (Author) commented Nov 25, 2021

root@master01:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 20-etcd-service-manager.conf
Active: active (running) since Thu 2021-11-25 17:23:34 UTC; 2min 54s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 166036 (kubelet)
Tasks: 14 (limit: 2279)
Memory: 34.3M
CGroup: /system.slice/kubelet.service
└─166036 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd

Nov 25 17:23:36 master01.fe.me kubelet[166036]: I1125 17:23:36.507642 166036 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/h>
Nov 25 17:23:36 master01.fe.me kubelet[166036]: I1125 17:23:36.608214 166036 reconciler.go:157] "Reconciler: start to sync state"
Nov 25 17:23:39 master01.fe.me kubelet[166036]: E1125 17:23:39.908160 166036 kuberuntime_manager.go:1037] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 9fc72d474cfc97708fd6f41>
Nov 25 17:23:39 master01.fe.me kubelet[166036]: E1125 17:23:39.909026 166036 kuberuntime_manager.go:1037] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: ce8b5f091a6d54d70e0e148>
Nov 25 17:23:39 master01.fe.me kubelet[166036]: E1125 17:23:39.909828 166036 kuberuntime_manager.go:1037] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 3c1934b667cb1ba451a35da>
Nov 25 17:23:40 master01.fe.me kubelet[166036]: E1125 17:23:40.912543 166036 kuberuntime_manager.go:1037] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 9fc72d474cfc97708fd6f41>
Nov 25 17:23:40 master01.fe.me kubelet[166036]: E1125 17:23:40.913267 166036 kuberuntime_manager.go:1037] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: ce8b5f091a6d54d70e0e148>
Nov 25 17:23:41 master01.fe.me kubelet[166036]: I1125 17:23:41.321996 166036 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3c1934b667cb1ba451a35dae26750c2bae0fabd953792c99a9d0b9dea0f66686"
Nov 25 17:23:46 master01.fe.me kubelet[166036]: I1125 17:23:46.774772 166036 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9fc72d474cfc97708fd6f416e11dacc62eec22b97d26f3c322f0ff6febcf0ce0"
Nov 25 17:23:48 master01.fe.me kubelet[166036]: I1125 17:23:48.134955 166036 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ce8b5f091a6d54d70e0e1481238161eb4bf85dc2af81bd45e994029b0b497a77"

@vupv (Author) commented Nov 25, 2021

I see some errors in the kubelet logs

@neolit123 (Member)

when you are seeing the error about "writing Crisocket.." in kubeadm, is the apiserver container running?

@vupv (Author) commented Nov 25, 2021

Yes, it's running and still up

root@master01:# docker ps -a |grep apiserver
9d25517ba5fb 8a5cc299272d "kube-apiserver --ad…" 11 minutes ago Up 11 minutes k8s_kube-apiserver_kube-apiserver-master01.fe.me_kube-system_6672ea5f57bf7865bd16ff5381900e45_8
3c1934b667cb k8s.gcr.io/pause:3.5 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-apiserver-master01.fe.me_kube-system_6672ea5f57bf7865bd16ff5381900e45_0
root@master01:~#

@vupv (Author) commented Nov 25, 2021

docker.ps.log

@vupv (Author) commented Nov 25, 2021

LB is fine
curl https://192.168.1.88:6443/healthz --cacert /etc/kubernetes/pki/ca.crt
ok
root@master01:~#

@neolit123 (Member) commented Nov 25, 2021

worth looking at the apiserver logs.
maybe it cannot connect to the etcd you are setting as external and it cannot write the Node update?

if the component is running and the storage backend is working, I don't see why kubeadm init would fail.
try also running kubeadm init with --v=10 and look at what happens with the requests.

also, is init failing consistently or only sometimes?
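One way to pull those apiserver logs with the docker-based setup from above, as a sketch:

root@master01:~# docker logs --tail 100 $(docker ps -qf name=k8s_kube-apiserver) 2>&1 | grep -iE 'etcd|error'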

@neolit123 (Member) commented Nov 25, 2021

if I understand the setup correctly you have two control plane nodes managed by kubeadm, and each has an external but co-located etcd member managed by systemd on the kubeadm nodes. is that right?

having the co-located members on the same nodes as external is not advised unless you want to pass special config to etcd that kubeadm's local/stacked etcd does not support.

also, you need 3 control plane nodes and 3 etcd members for HA:
https://etcd.io/docs/v3.3/faq/#what-is-failure-tolerance
2 etcd members cannot maintain quorum after a failure: the majority of 2 is 2, so losing either member stalls the cluster, whereas 3 members tolerate losing one.

@vupv (Author) commented Nov 25, 2021

if i understand the setup correctly you have two control plane nodes managed by kubeadm and each has a external but co-located etcd member managed by systemd on the kubeadm nodes. is that right?
--> yes

@vupv (Author) commented Nov 25, 2021

kubeadm init --config=cluster.yaml --upload-certs --v=10

I1125 18:09:50.453292 181098 round_trippers.go:454] GET https://192.168.1.88:6443/api/v1/nodes/master01.fe.me?timeout=10s 404 Not Found in 4 milliseconds
I1125 18:09:50.453329 181098 round_trippers.go:460] Response Headers:
I1125 18:09:50.453334 181098 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: be93107f-43e7-4618-8bfb-dc3de2adc4d7
I1125 18:09:50.453338 181098 round_trippers.go:463] Content-Length: 198
I1125 18:09:50.453341 181098 round_trippers.go:463] Date: Thu, 25 Nov 2021 18:09:50 GMT
I1125 18:09:50.453345 181098 round_trippers.go:463] Audit-Id: a527e861-8234-4af6-b2f0-87a613a1a4cd
I1125 18:09:50.453399 181098 round_trippers.go:463] Cache-Control: no-cache, private
I1125 18:09:50.453409 181098 round_trippers.go:463] Content-Type: application/json
I1125 18:09:50.453413 181098 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 16670d95-4b6b-42e3-8c15-3adf5a0ec585
I1125 18:09:50.453442 181098 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadconfig.go:131
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50

@vupv (Author) commented Nov 25, 2021

master01.fe.me is the first master node I am initializing

@vupv (Author) commented Nov 25, 2021

2 etcd members cannot vote (in case of failure). --> I will try with 3 etcd members

@vupv (Author) commented Nov 25, 2021

I1125 18:09:50.453442 181098 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}

master01.fe.me is NotFound; is something wrong?

@neolit123 (Member) commented Nov 25, 2021

2 etcd members cannot vote (in case of failure). --> I will try with 3 etcd

yes, because 2 members are not HA, if HA is what you want.

if i understand the setup correctly you have two control plane nodes managed by kubeadm and each has a external but co-located etcd member managed by systemd on the kubeadm nodes. is that right?
--> yes

ok, but why external etcd on the same nodes?

I1125 18:09:50.453442 181098 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition

the kubelet is responsible for creating the node object, so if that failed for some reason the object would not be available in etcd and the kube-apiserver would not be able to perform actions on it, such as patch. kubeadm requests to patch it....once kubeadm sees there is a /etc/kubernetes/kubelet.conf it assumes the node object has already been created after TLS bootstrap.

kubeadm will retry patching the socket on the node object for a while and eventually fail.

make sure you have matching kubelet, apiserver and kubeadm versions.
also look at the full kubelet logs to see if there is a problem around the node object. at the beginning it would complain "not found", but later it should be available.
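Two quick checks for the points above, as a sketch:

# versions should line up
kubeadm version -o short
kubelet --version
# look for node registration attempts in the kubelet log
journalctl -u kubelet --no-pager | grep -iE 'registered node|not found'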

@neolit123 (Member)

i'd also experiment with removing:

etcd:
  external: # <-------
  ... 

from the kubeadm config and call kubeadm init on the host, to see if kubeadm can create a single node cluster with a managed etcd instance. if that works then the problem is the etcd setup.

if it also fails then it may be related to something in the kubelet / apiserver.

@vupv (Author) commented Nov 26, 2021

I tried to init with only one node and the default config, but it still fails.

cluster1.txt

I1126 06:26:48.876863 296691 round_trippers.go:454] GET https://192.168.1.85:6443/api/v1/nodes/master01.fe.me?timeout=10s 404 Not Found in 1 milliseconds
I1126 06:26:48.876873 296691 round_trippers.go:460] Response Headers:
I1126 06:26:48.876878 296691 round_trippers.go:463] Content-Length: 198
I1126 06:26:48.876882 296691 round_trippers.go:463] Date: Fri, 26 Nov 2021 06:26:48 GMT
I1126 06:26:48.876887 296691 round_trippers.go:463] Audit-Id: dc0d4010-bcab-4481-8551-d2d469de3bc0
I1126 06:26:48.876891 296691 round_trippers.go:463] Cache-Control: no-cache, private
I1126 06:26:48.876894 296691 round_trippers.go:463] Content-Type: application/json
I1126 06:26:48.876897 296691 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: f9e5887b-c280-4dc1-8be7-9a907cd1a627
I1126 06:26:48.876901 296691 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: d55c9d4c-8f6d-4731-8de8-bf09693e099c
I1126 06:26:48.877234 296691 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadconfig.go:131

@neolit123 (Member) commented Nov 26, 2021

here are a couple of things to try:

  1. make sure you are not passing the flag --register-node=false to the kubelet (e.g. via systemd files), this would tell the kubelet to not create the node object at all...

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd

  2. try kubeadm init ... with this simple config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

this would create a single node cluster without an LB. if it passes then the problem is the LB...probably related to HTTPS forwarding or temporary blips (10s timeout for upload config). not a kubeadm issue.

  3. if 2 still doesn't work...call kubeadm reset... then try init with this config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- upload-config
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

this would skip the phase that is failing (upload config). more nodes will not be able to join this cluster because of the skipped important phase. if it passes the problem is in the kubelet, share the kubelet logs using journalctl -xeu kubelet.

use sudo watch kubectl --kubeconfig=/etc/kubernetes/admin.conf get no in a parallel terminal to see if the node object is being created while kubeadm init ... is running.

@vupv (Author) commented Nov 26, 2021

root@master01:~# kubeadm init --config=cluster2.yaml -v=10
where cluster2.yaml is the config from option 2, and I got an error:

I1126 14:48:06.011327 43988 round_trippers.go:454] GET https://192.168.1.85:6443/api/v1/nodes/master01.fe.me?timeout=10s 404 Not Found in 3 milliseconds
I1126 14:48:06.011341 43988 round_trippers.go:460] Response Headers:
I1126 14:48:06.011375 43988 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e53d3b0d-b6b1-46f0-9250-582b1c249944
I1126 14:48:06.011384 43988 round_trippers.go:463] Content-Length: 198
I1126 14:48:06.011388 43988 round_trippers.go:463] Date: Fri, 26 Nov 2021 14:48:06 GMT
I1126 14:48:06.011391 43988 round_trippers.go:463] Audit-Id: b37e5456-5587-4854-b697-c1770669f968
I1126 14:48:06.011394 43988 round_trippers.go:463] Cache-Control: no-cache, private
I1126 14:48:06.011398 43988 round_trippers.go:463] Content-Type: application/json
I1126 14:48:06.011401 43988 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: e2b16734-9cdc-4d54-a79a-eb5edb09979d
I1126 14:48:06.011762 43988 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/c

@vupv (Author) commented Nov 26, 2021

root@master01:~# kubeadm init --config=cluster3.yaml -v=10
where cluster3.yaml is the config from option 3, and I got an error:

I1126 14:55:21.107157 47161 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
I1126 14:55:21.107605 47161 round_trippers.go:435] curl -v -XGET -H "Accept: application/json, /" -H "User-Agent: kubeadm/v1.22.4 (linux/amd64) kubernetes/b695d79" 'https://192.168.1.85:6443/api/v1/nodes/master01.fe.me?timeout=10s'
I1126 14:55:21.109046 47161 round_trippers.go:454] GET https://192.168.1.85:6443/api/v1/nodes/master01.fe.me?timeout=10s 404 Not Found in 1 milliseconds
I1126 14:55:21.109057 47161 round_trippers.go:460] Response Headers:
I1126 14:55:21.109062 47161 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: 1102c704-fec2-4b8a-bc04-798992fd568d
I1126 14:55:21.109065 47161 round_trippers.go:463] Content-Length: 198
I1126 14:55:21.109069 47161 round_trippers.go:463] Date: Fri, 26 Nov 2021 14:55:21 GMT
I1126 14:55:21.109076 47161 round_trippers.go:463] Audit-Id: 8516a109-a7af-4a00-b656-15a654dbb022
I1126 14:55:21.109079 47161 round_trippers.go:463] Cache-Control: no-cache, private
I1126 14:55:21.109083 47161 round_trippers.go:463] Content-Type: application/json
I1126 14:55:21.109086 47161 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: b26a110e-3104-4109-b170-3ab2cb4f102f
I1126 14:55:21.109478 47161 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition
error execution phase mark-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235

@neolit123 (Member) commented Nov 26, 2021

try this config for option 3 instead:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- upload-config
- mark-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

also share full kubelet logs as mentioned above.

@vupv (Author) commented Nov 26, 2021

kubelet.log

@neolit123 (Member)

did kubeadm still throw errors with the config here?
#2612 (comment)

@vupv (Author) commented Nov 26, 2021

kubelet.log shows the last error; let me try this option.

did kubeadm still throw errors with the config here? #2612 (comment)

@vupv (Author) commented Nov 26, 2021

try this config for option 3 instead:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- upload-config
- mark-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

also share full kubelet logs as mentioned above.

Init with this option: Your Kubernetes control-plane has initialized successfully!

What is the difference from the others?

@vupv (Author) commented Nov 26, 2021

root@master01:~# kubectl get node
No resources found

@neolit123 (Member)

Init in this option: Your Kubernetes control-plane has initialized successfully!

it skips the parts in kubeadm that need the Node object to be created.
but this cluster is unusable because the Node object is needed by kubeadm join

root@master01:~# kubectl get node
No resources found

are you following this guide for installing the packages?
e.g. using apt-get install kubeadm...
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

@vupv (Author) commented Nov 27, 2021

Yes, I installed following those steps

@neolit123 (Member) commented Nov 27, 2021

in the kubelet logs i see some strange errors related to container sandboxes.

i also see this:

Nov 23 08:05:57 master01.fe.me kubelet[67349]: W1123 08:05:57.698918 67349 watcher.go:95] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice: no such file or directory

this is an indication that the systemd cgroup driver might not be working well with this docker.

you can try to remove the "exec-opts": ["native.cgroupdriver=systemd"], line from the docker config and restart docker.
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
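Concretely that is roughly the following, assuming the default /etc/docker/daemon.json location:

root@master01:~# vi /etc/docker/daemon.json   # delete the "exec-opts": ["native.cgroupdriver=systemd"] line
root@master01:~# systemctl restart docker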

then try kubeadm init with this config:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs

NOTE!
since the built-in docker support in the kubelet (dockershim) is being removed soon, docker is not a recommended container runtime for the time being.

you should use containerd or cri-o with a systemd driver for new clusters.
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
make sure you uninstall docker first.
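For reference, the usual way to put containerd on the systemd cgroup driver (a sketch based on the container-runtimes page linked above):

containerd config default | tee /etc/containerd/config.toml
# set SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd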


there are also errors related to the kubelet not being able to talk to https://192.168.1.88:6443 which comes from the kube-apiserver container. the node object is not being registered properly because of that.
yet, earlier you have shown that this api server is working which is weird...

as a side note, always make sure you call kubeadm reset on the node before kubeadm join or kubeadm init.

@neolit123 (Member)

NOTE!
since the built-in docker support in kubelet (dockershim) is being removed soon, docker is not a recommended CR for the time being.

https://kubernetes.io/blog/2020/12/02/dockershim-faq/

@vupv (Author) commented Nov 27, 2021

Hi Ivan,
I have removed docker and used containerd with the systemd driver for the new cluster.
default_config_init.log
And the Kubernetes control-plane has initialized successfully!

root@master01:# kubectl get node
NAME STATUS ROLES AGE VERSION
master01.fe.me Ready control-plane,master 29m v1.22.4
root@master01:~#

@vupv (Author) commented Nov 27, 2021

In case I would like to set up a multi-master cluster with external etcd, could you please advise on the init config?

@vupv closed this as completed Nov 27, 2021
@vupv (Author) commented Nov 27, 2021

Uploading cluster.txt…

@vupv reopened this Nov 27, 2021
@vupv (Author) commented Nov 27, 2021

In case I would like to set up a multi-master cluster with external etcd, could you please advise on the init config?

@vupv (Author) commented Nov 27, 2021

cluster.txt

@vupv (Author) commented Nov 27, 2021

I1127 12:19:21.354945 403780 round_trippers.go:454] GET https://192.168.1.88:6443/api/v1/nodes/master01.fe.me?timeout=10s 404 Not Found in 5 milliseconds
I1127 12:19:21.354975 403780 round_trippers.go:460] Response Headers:
I1127 12:19:21.354980 403780 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: 930f8a31-6773-4edf-9e51-acde47d118a5
I1127 12:19:21.355011 403780 round_trippers.go:463] Content-Length: 198
I1127 12:19:21.355016 403780 round_trippers.go:463] Date: Sat, 27 Nov 2021 12:19:21 GMT
I1127 12:19:21.355019 403780 round_trippers.go:463] Audit-Id: 58246fc2-761a-4943-9a72-501b842ef26c
I1127 12:19:21.355023 403780 round_trippers.go:463] Cache-Control: no-cache, private
I1127 12:19:21.355026 403780 round_trippers.go:463] Content-Type: application/json
I1127 12:19:21.355030 403780 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: bca17f72-03c8-467f-8875-6d56b570935d
I1127 12:19:21.355137 403780 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "master01.fe.me" not found","reason":"NotFound","details":{"name":"master01.fe.me","kind":"nodes"},"code":404}
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadconfig.go:131
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234

@neolit123 (Member) commented Nov 27, 2021 via email

@k8s-ci-robot (Contributor)

@neolit123: Closing this issue.

In response to this:

If containerd works for you, great...make sure you install it on all nodes.
Might be a good idea for us to update the troubleshooting guide about this
docker problem.

Our HA docs are here:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

For additional questions please use the support forums:
https://github.com/kubernetes/kubeadm#support

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@vupv (Author) commented Nov 27, 2021

ful_log_init.log

@LonersBoy commented Dec 9, 2022

there are also errors related to the kubelet not being able to talk to https://192.168.1.88:6443 which comes from the kube-apiserver container. the node object is not being registered properly because of that. yet, earlier you have shown that this api server is working which is weird...

I'm hitting the same issue now; the node object is not being registered.
The container runtime used is containerd, not docker.
I want to install Kubernetes 1.25.4.

@LonersBoy

  2. try kubeadm init ... with this simple config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

this would create a single node cluster without an LB. if it passes then the problem is the LB...probably related to HTTPS forwarding or temporary blips (10s timeout for upload config). not a kubeadm issue.

Init with this option:

[root@host50 ~]# kubeadm init --config test-config.yaml --upload-certs --ignore-preflight-errors=all -v=6
I1209 16:44:56.894136   34677 initconfiguration.go:254] loading configuration from "test-config.yaml"
W1209 16:44:56.895600   34677 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
I1209 16:44:56.895850   34677 interface.go:432] Looking for default routes with IPv4 addresses
I1209 16:44:56.895862   34677 interface.go:437] Default route transits interface "ens32"
I1209 16:44:56.896191   34677 interface.go:209] Interface ens32 is up
I1209 16:44:56.896255   34677 interface.go:257] Interface "ens32" has 3 addresses :[192.168.188.50/24 192.168.188.60/24 fe80::d33c:34dc:2a06:2c13/64].
I1209 16:44:56.896296   34677 interface.go:224] Checking addr  192.168.188.50/24.
I1209 16:44:56.896307   34677 interface.go:231] IP found 192.168.188.50
I1209 16:44:56.896317   34677 interface.go:263] Found valid IPv4 address 192.168.188.50 for interface "ens32".
I1209 16:44:56.896326   34677 interface.go:443] Found active IP 192.168.188.50
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
I1209 16:44:56.900489   34677 checks.go:568] validating Kubernetes and kubeadm version
I1209 16:44:56.900516   34677 checks.go:168] validating if the firewall is enabled and active
I1209 16:44:56.909617   34677 checks.go:203] validating availability of port 6443
I1209 16:44:56.910090   34677 checks.go:203] validating availability of port 10259
I1209 16:44:56.910131   34677 checks.go:203] validating availability of port 10257
I1209 16:44:56.910169   34677 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1209 16:44:56.910185   34677 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1209 16:44:56.910196   34677 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1209 16:44:56.910207   34677 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1209 16:44:56.910225   34677 checks.go:430] validating if the connectivity type is via proxy or direct
I1209 16:44:56.910269   34677 checks.go:469] validating http connectivity to first IP address in the CIDR
I1209 16:44:56.910285   34677 checks.go:469] validating http connectivity to first IP address in the CIDR
I1209 16:44:56.910297   34677 checks.go:104] validating the container runtime
I1209 16:44:56.942849   34677 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1209 16:44:56.942917   34677 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1209 16:44:56.942963   34677 checks.go:644] validating whether swap is enabled or not
I1209 16:44:56.943024   34677 checks.go:370] validating the presence of executable crictl
I1209 16:44:56.943052   34677 checks.go:370] validating the presence of executable conntrack
I1209 16:44:56.943071   34677 checks.go:370] validating the presence of executable ip
I1209 16:44:56.943088   34677 checks.go:370] validating the presence of executable iptables
I1209 16:44:56.943108   34677 checks.go:370] validating the presence of executable mount
I1209 16:44:56.943131   34677 checks.go:370] validating the presence of executable nsenter
I1209 16:44:56.943152   34677 checks.go:370] validating the presence of executable ebtables
I1209 16:44:56.943170   34677 checks.go:370] validating the presence of executable ethtool
I1209 16:44:56.943188   34677 checks.go:370] validating the presence of executable socat
I1209 16:44:56.943208   34677 checks.go:370] validating the presence of executable tc
I1209 16:44:56.943225   34677 checks.go:370] validating the presence of executable touch
I1209 16:44:56.943246   34677 checks.go:516] running all checks
I1209 16:44:56.952691   34677 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I1209 16:44:56.952874   34677 checks.go:610] validating kubelet version
I1209 16:44:57.020654   34677 checks.go:130] validating if the "kubelet" service is enabled and active
I1209 16:44:57.030251   34677 checks.go:203] validating availability of port 10250
I1209 16:44:57.030313   34677 checks.go:203] validating availability of port 2379
I1209 16:44:57.030351   34677 checks.go:203] validating availability of port 2380
I1209 16:44:57.030389   34677 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1209 16:44:57.030542   34677 checks.go:832] using image pull policy: IfNotPresent
I1209 16:44:57.055656   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.4
I1209 16:44:57.079615   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.4
I1209 16:44:57.103462   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.4
I1209 16:44:57.127201   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.25.4
I1209 16:44:57.150770   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/pause:3.8
I1209 16:44:57.174404   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/etcd:3.5.5-0
I1209 16:44:57.198114   34677 checks.go:841] image exists: registry.aliyuncs.com/google_containers/coredns:v1.9.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1209 16:44:57.198178   34677 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1209 16:44:57.651627   34677 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host50 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.188.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1209 16:44:58.605342   34677 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1209 16:44:58.772230   34677 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1209 16:44:58.925411   34677 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1209 16:44:59.099971   34677 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host50 localhost] and IPs [192.168.188.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host50 localhost] and IPs [192.168.188.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1209 16:45:00.417303   34677 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1209 16:45:00.753748   34677 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1209 16:45:01.211824   34677 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1209 16:45:01.495052   34677 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1209 16:45:01.594941   34677 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1209 16:45:02.017576   34677 kubelet.go:66] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1209 16:45:02.125981   34677 manifests.go:99] [control-plane] getting StaticPodSpecs
I1209 16:45:02.126285   34677 certs.go:522] validating certificate period for CA certificate
I1209 16:45:02.126366   34677 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1209 16:45:02.126378   34677 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1209 16:45:02.126388   34677 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1209 16:45:02.129659   34677 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1209 16:45:02.129696   34677 manifests.go:99] [control-plane] getting StaticPodSpecs
I1209 16:45:02.129911   34677 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1209 16:45:02.129931   34677 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1209 16:45:02.129944   34677 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1209 16:45:02.129955   34677 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1209 16:45:02.129964   34677 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1209 16:45:02.130656   34677 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1209 16:45:02.130677   34677 manifests.go:99] [control-plane] getting StaticPodSpecs
I1209 16:45:02.130873   34677 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1209 16:45:02.131466   34677 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1209 16:45:02.132182   34677 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1209 16:45:02.132197   34677 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I1209 16:45:02.132821   34677 loader.go:374] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1209 16:45:02.133859   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s  in 0 milliseconds
I1209 16:45:02.634824   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s  in 0 milliseconds
I1209 16:45:03.134618   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s  in 0 milliseconds
I1209 16:45:03.634888   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s  in 0 milliseconds
I1209 16:45:06.364551   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s 500 Internal Server Error in 2230 milliseconds
I1209 16:45:06.635808   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I1209 16:45:07.135694   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I1209 16:45:07.635351   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I1209 16:45:08.136266   34677 round_trippers.go:553] GET https://192.168.188.50:6443/healthz?timeout=10s 200 OK in 1 milliseconds
[apiclient] All control plane components are healthy after 6.002864 seconds
I1209 16:45:08.136356   34677 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1209 16:45:08.139107   34677 round_trippers.go:553] POST https://192.168.188.50:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
I1209 16:45:08.141405   34677 round_trippers.go:553] POST https://192.168.188.50:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 1 milliseconds
I1209 16:45:08.143501   34677 round_trippers.go:553] POST https://192.168.188.50:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 1 milliseconds
I1209 16:45:08.143642   34677 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1209 16:45:08.145857   34677 round_trippers.go:553] POST https://192.168.188.50:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 milliseconds
I1209 16:45:08.147751   34677 round_trippers.go:553] POST https://192.168.188.50:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 1 milliseconds
I1209 16:45:08.149721   34677 round_trippers.go:553] POST https://192.168.188.50:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 1 milliseconds
I1209 16:45:08.149832   34677 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I1209 16:45:08.149858   34677 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "host50" as an annotation
I1209 16:45:08.652703   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 2 milliseconds
I1209 16:45:09.151598   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:09.651923   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:10.152259   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:10.652298   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:11.152449   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:11.651499   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:12.151418   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:12.651360   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:13.151485   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:13.651705   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:14.152038   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:14.652444   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:15.151469   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:15.651454   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:16.151717   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:16.651680   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:17.151739   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:17.651692   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:18.151813   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:18.652183   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:19.152427   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:19.651804   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:20.151566   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:20.651536   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:21.151556   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:21.652338   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:22.151715   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:22.651629   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:23.151413   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:23.651916   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:24.152402   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:24.651984   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:25.152305   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:25.652412   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:26.151756   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:26.651777   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds
I1209 16:45:27.151800   34677 round_trippers.go:553] GET https://192.168.188.50:6443/api/v1/nodes/host50?timeout=10s 404 Not Found in 1 milliseconds

@cyford commented Aug 13, 2023

I believe my reason was because I used --node-name $NODENAME, which set the node name to my hostname, and my hostname was uppercase, which I think may not be supported.
Removing the --node-name switch worked.
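Node names must be lowercase RFC 1123 names, so if the flag is still wanted, lowercasing the hostname first should also work (a sketch):

kubeadm init --node-name "$(hostname | tr '[:upper:]' '[:lower:]')"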
