Hi all, I have a cluster where all the certificates expired yesterday.
So this morning I renewed all the certificates with `kubeadm certs renew all` on the master, and after that the apiserver and the other components started normally.
But now deletes and updates to resources don't take effect as expected: for example, when I delete a pod in a Deployment it is not recreated. I also tried to add a new node to the cluster and got this error: [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "xeyhmo", will try again
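In case it's useful, this is what I can still run on the master to check things (just a sketch, assuming a default kubeadm setup):

```bash
# verify that the renewed certificates are really no longer expired
kubeadm certs check-expiration

# list the bootstrap tokens and check that the one used for the join is still valid
kubeadm token list

# if needed, create a fresh token together with the full join command
kubeadm token create --print-join-command
```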
Hi. This might indicate problems with the kube-controller-manager (KCM).
Make sure you restarted all components, including the KCM, once `certs renew` finished. The KCM needs not only its kubeconfig file but the CA key/cert as well.
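If you want to double-check which files the KCM actually uses, its static pod manifest lists them as flags; a quick sketch, assuming the default kubeadm paths:

```bash
# kubeadm defaults: kubeconfig, cluster-signing cert/key and root CA flags
grep -E 'kubeconfig|cluster-signing|root-ca-file' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml

# the CA itself should still be valid
openssl x509 -noout -enddate -in /etc/kubernetes/pki/ca.crt
```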
The JWS signature for a token ID is added by the KCM, so if that is a valid / non-expired token, then something is not OK with the KCM.
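A quick way to see this is to look at the cluster-info ConfigMap itself; for every valid bootstrap token the KCM's bootstrap-signer should maintain a jws-kubeconfig-&lt;token-id&gt; entry in it (sketch, token ID taken from your join error):

```bash
# should contain a data key like "jws-kubeconfig-xeyhmo" for the token above
kubectl -n kube-public get configmap cluster-info -o yaml
```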
The pod deletion problems could indicate that something is wrong in, e.g., the ReplicaSet controller inside the KCM.
Check if the KCM is crashing/restarting and look at its logs.
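For example (a sketch; &lt;node-name&gt; and &lt;container-id&gt; are placeholders, and crictl is only a fallback in case kubectl itself is not usable):

```bash
# status and restart count of the KCM static pod
kubectl -n kube-system get pods -l component=kube-controller-manager

# current and previous logs (pod name is kube-controller-manager-<node-name>)
kubectl -n kube-system logs kube-controller-manager-<node-name>
kubectl -n kube-system logs kube-controller-manager-<node-name> --previous

# fallback, directly on the node
crictl ps -a --name kube-controller-manager
crictl logs <container-id>
```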
After running the command you should restart the control plane Pods. This is required since dynamic certificate reload is currently not supported by all components and certificates. Static Pods are managed by the local kubelet and not by the API server, so kubectl cannot be used to delete and restart them. To restart a static Pod you can temporarily remove its manifest file from /etc/kubernetes/manifests/ and wait for 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). The kubelet will terminate the Pod if it's no longer in the manifest directory. You can then move the file back, and after another fileCheckFrequency period the kubelet will recreate the Pod and the certificate renewal for the component can complete.
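Roughly like this, assuming the default manifest path (repeat for each control-plane component you want to restart):

```bash
# move the manifest out of the directory the kubelet watches
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/

# wait for the kubelet to notice and terminate the Pod (fileCheckFrequency is 20s by default)
sleep 20

# move it back; after another check interval the kubelet recreates the Pod
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```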