


Table of Contents

1          Documentation

2          Install a cluster in Azure

2.1      Create the VMs in Azure

2.2      Install the control plane

2.3      Set up auto-completion

2.4      Add workers

 


1      Documentation

AKS: https://azure.microsoft.com/es-es/products/kubernetes-service

 

K8S: https://kubernetes.io/docs/home/

2      Install a cluster in Azure

2.1   Create the VMs in Azure

Create a two-VM cluster in Azure:

 

cp                   Linux (Ubuntu 20.04)               Standard B2s (2 vCPUs, 4 GiB memory)

worker               Linux (Ubuntu 20.04)               Standard B2s (2 vCPUs, 4 GiB memory)
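
The notes do not show how the VMs were created; a rough Azure CLI sketch that would create two similar VMs follows (the resource group name, admin user and image URN are assumptions, not taken from the original):

az group create --name k8s-lab --location westeurope

for vm in cp worker; do
  az vm create \
    --resource-group k8s-lab \
    --name $vm \
    --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
    --size Standard_B2s \
    --admin-username azureuser \
    --generate-ssh-keys
done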

 

2.2   Install the control plane

Almost all of the following commands are run on cp as root.

 

Update the OS

apt-get update && apt-get upgrade -y

 

Install Docker

apt-get install -y docker.io

 

Install required packages

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

 

Save the repository's public signing key

mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

 

Add the Kubernetes repository to my APT sources

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

 

 

Install kubelet, kubeadm and kubectl

apt-get update

apt-get install -y kubelet kubeadm kubectl
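
If a specific patch version is wanted, the available packages can be listed and pinned explicitly (the version string below is only an example; adjust to what apt-cache reports):

apt-cache madison kubeadm

apt-get install -y kubelet=1.30.1-1.1 kubeadm=1.30.1-1.1 kubectl=1.30.1-1.1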

 

Prevent kubelet, kubeadm and kubectl from being upgraded

apt-mark hold kubelet kubeadm kubectl

kubelet set on hold.

kubeadm set on hold.

kubectl set on hold.

 

Download Calico

wget https://docs.projectcalico.org/manifests/calico.yaml

 

Configure Calico

vim calico.yaml

Uncomment (this CIDR must match the --pod-network-cidr passed to kubeadm init below):

- name: CALICO_IPV4POOL_CIDR

  value: "192.168.0.0/16"

 

Add my private IP to /etc/hosts

10.0.0.4 cp
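
The same entry can be appended non-interactively if preferred (IP taken from this setup):

echo "10.0.0.4 cp" | sudo tee -a /etc/hosts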

 

Reconfigure Docker

vim /etc/docker/daemon.json
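
The notes do not show the file contents; a common daemon.json for kubeadm clusters that still use Docker switches to the systemd cgroup driver (an assumption here, not confirmed by the original):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}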

 

Restart Docker

systemctl restart docker ; sleep 20 ; systemctl status docker

docker.service - Docker Application Container Engine

     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)

     Active: active (running) since Sat 2024-05-18 07:27:00 UTC; 20s ago

TriggeredBy: docker.socket

       Docs: https://docs.docker.com

   Main PID: 8079 (dockerd)

      Tasks: 9

     Memory: 30.1M

     CGroup: /system.slice/docker.service

             └─8079 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

 

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.171189741Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.>

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.184305878Z" level=info msg="[graphdriver] trying configured driver: overlay2"

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.210454950Z" level=info msg="Loading containers: start."

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.313667411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set>

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.366920972Z" level=info msg="Loading containers: done."

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.406745391Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CO>

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.407474104Z" level=info msg="Docker daemon" commit="24.0.5-0ubuntu1~20.04.1" graphdriver=overlay2 version=24.0.5

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.407676907Z" level=info msg="Daemon has completed initialization"

May 18 07:27:00 cp dockerd[8079]: time="2024-05-18T07:27:00.432816361Z" level=info msg="API listen on /run/docker.sock"

May 18 07:27:00 cp systemd[1]: Started Docker Application Container Engine.

 

 

Initialize the cluster

kubeadm init --pod-network-cidr 192.168.0.0/16 --control-plane-endpoint "cp:6443"

[init] Using Kubernetes version: v1.30.1

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

W0518 07:49:25.150533    8267 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.4]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [cp localhost] and IPs [10.0.0.4 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [cp localhost] and IPs [10.0.0.4 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "super-admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"

[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s

[kubelet-check] The kubelet is healthy after 1.001796264s

[api-check] Waiting for a healthy API server. This can take up to 4m0s

[api-check] The API server is healthy after 7.002138373s

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node cp as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node cp as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: bni71d.jgq28u8imd7am5dz

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Alternatively, if you are the root user, you can run:

 

  export KUBECONFIG=/etc/kubernetes/admin.conf

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

 

  kubeadm join cp:6443 --token bni71d.jgq28u8imd7am5dz \

        --discovery-token-ca-cert-hash sha256:760b2526a8d01df92e8c584173c16dbf2abff2cb5a2b1507c839abbc92b58bc7 \

        --control-plane

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join cp:6443 --token bni71d.jgq28u8imd7am5dz \

        --discovery-token-ca-cert-hash sha256:760b2526a8d01df92e8c584173c16dbf2abff2cb5a2b1507c839abbc92b58bc7

 

 

 

The authentication token bni71d.jgq28u8imd7am5dz from the previous command can be listed with

kubeadm token list

TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS

bni71d.jgq28u8imd7am5dz   23h         2024-05-19T07:50:04Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

 

Allow a non-root user admin access to Kubernetes (run as my regular user)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

 

Apply the pod network configuration

sudo cp /root/calico.yaml .

kubectl apply -f calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

serviceaccount/calico-node created

configmap/calico-config created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

deployment.apps/calico-kube-controllers created
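
As a quick check (not part of the original notes), the Calico pods should reach Running and the cp node should become Ready:

kubectl get pods -n kube-system

kubectl get nodes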

 

2.3   Set up auto-completion

sudo apt-get install bash-completion -y

 

Log off and log back in

 

source <(kubectl completion bash)

 

echo "source <(kubectl completion bash)" >> $HOME/.bashrc

 

 

2.4   Add workers

Update the OS

apt-get update && apt-get upgrade -y

 

Install Docker

apt-get install -y docker.io

 

Add the Kubernetes repository

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

 

Install required packages

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

 

Save the repository's public signing key

mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

 

Install kubelet, kubeadm and kubectl

apt-get update

apt-get install -y kubelet kubeadm kubectl

 

Prevent kubelet, kubeadm and kubectl from being upgraded

apt-mark hold kubelet kubeadm kubectl

kubelet set on hold.

kubeadm set on hold.

kubectl set on hold.

 

Add the cp private IP to the worker's /etc/hosts

10.0.0.4 cp

 

Join the worker (if the token is still valid)

kubeadm join cp:6443 --token bni71d.jgq28u8imd7am5dz --discovery-token-ca-cert-hash sha256:760b2526a8d01df92e8c584173c16dbf2abff2cb5a2b1507c839abbc92b58bc7

 

Check on cp that the worker has joined

kubectl get nodes

NAME     STATUS     ROLES           AGE   VERSION

cp       Ready      control-plane   23m   v1.30.1

worker   NotReady   <none>          26s   v1.30.1
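
The worker normally switches to Ready once its Calico pod is running; one way to watch for it (not in the original notes):

kubectl get nodes -w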

 

Join the worker (if the token is no longer valid)

On cp, create a new token

sudo kubeadm token create

27eee4.6e66ff60318da929

 

Create the CA certificate hash on cp

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

(stdin)= 6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0

 

Use the token and hash (in the form sha256:long-hash) to join the cluster from the worker node, using the private IP address (or /etc/hosts alias) of the cp server and port 6443. The output of kubeadm init on the cp also shows an example join command.

Join the new worker

kubeadm join cp:6443 --token 27eee4.6e66ff60318da929 \
        --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
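
Alternatively, kubeadm can print a complete, ready-to-run join command with a fresh token and the CA hash already filled in; this is a standard shortcut rather than part of the original notes. Run it on cp and paste the output on the worker:

sudo kubeadm token create --print-join-command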