Outputs:
  StartRuleName:
    Description: "The name of the EventBridge rule for EC2 start."
    Value: !Ref StartRule
  StopRuleName:
    Description: "The name of the EventBridge rule for EC2 stop."
    Value: !Ref StopRule
  IAMRoleName:
    Description: "The IAM role used by EventBridge for EC2 start/stop automation."
    Value: !Ref EC2StartStopRole
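Once the stack is deployed, these outputs can be read back from the command line. A minimal sketch using the AWS CLI; the stack name ec2-start-stop here is hypothetical:

  aws cloudformation describe-stacks \
      --stack-name ec2-start-stop \
      --query 'Stacks[0].Outputs' \
      --output table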
pi@node01:~ $ sudo kubeadm init --apiserver-advertise-address=192.168.11.201 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node01] and IPs [10.96.0.1 XXX.YYY.ZZZ.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node01] and IPs [192.168.11.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node01] and IPs [192.168.11.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.004780 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xua9wk.gwe59et1dix4dw1q
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
pi@node01:~ $ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
pi@node01:~ $
pi@node02:~ $ sudo kubeadm join 192.168.11.201:6443 --token 0hfvvz.63g939bbbv4iuoqz \
	--discovery-token-ca-cert-hash sha256:e4fad29f4cadb1fc68464bbae4767e18766365d1fa8c46678301c26a1f0911c7
[preflight] Running pre-flight checks
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
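Before the fix described next, the kubelet log on the worker is where the swap problem shows up. A sketch of what to run there; the grep filter is just my suggestion for spotting the swap complaint quickly:

pi@node02:~ $ systemctl status kubelet
pi@node02:~ $ sudo journalctl -u kubelet --no-pager | grep -i swap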
After this I disabled swap, ran kubeadm reset, and then ran kubeadm join again; this time the node showed up as Ready in kubectl get node.
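For the record, the recovery went roughly like this. A sketch assuming Raspberry Pi OS, where swap is managed by the dphys-swapfile service; the token and hash are the ones from the failed join above:

pi@node02:~ $ sudo dphys-swapfile swapoff
pi@node02:~ $ sudo systemctl disable --now dphys-swapfile   # keep swap off across reboots
pi@node02:~ $ free -h                                       # Swap should now read 0B
pi@node02:~ $ sudo kubeadm reset
pi@node02:~ $ sudo kubeadm join 192.168.11.201:6443 --token 0hfvvz.63g939bbbv4iuoqz \
	--discovery-token-ca-cert-hash sha256:e4fad29f4cadb1fc68464bbae4767e18766365d1fa8c46678301c26a1f0911c7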
To repeat: you should assume that adding a Raspberry Pi 3B+ to a Kubernetes cluster is effectively not workable. The installation succeeds, but that is as far as it goes. Even a "let's just try the examples from the book" kind of workload may bring the node down, and then you end up staring at syslog and journalctl while the actual studying grinds to a halt. In that case, use Docker Desktop (for as long as it keeps bundling Kubernetes...) or the Play with Kubernetes environment (again, for as long as it lasts...).
This is a sandbox environment. Using personal credentials is HIGHLY! discouraged. Any consequences of doing so, are completely the user's responsibilites.
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.X:6443 --token XXXX...XXX \
	--discovery-token-ca-cert-hash sha256:XXXX...XXX
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
[node1 ~]$
[node2 ~]$ kubeadm join 192.168.0.X:6443 --token XXX...XXX --discovery-token-ca-cert-hash sha256:XXXXXX....XXXXX
Initializing machine ID from random generator.
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Back on node1, run kubectl get nodes and confirm that node2 now appears in addition to the master.
After that, run steps 2 and 3 on node1 again, and kubectl get pod will show the pods Running (READY 1/1) rather than Pending, as in the transcript below.
[node1 ~]$ kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
configmap/kube-router-cfg created
daemonset.apps/kube-router created
serviceaccount/kube-router created
clusterrole.rbac.authorization.k8s.io/kube-router created
clusterrolebinding.rbac.authorization.k8s.io/kube-router created
[node1 ~]$ kubectl get nodes
NAME    STATUS     ROLES                  AGE    VERSION
node1   NotReady   control-plane,master   12m    v1.20.1
node2   NotReady   <none>                 5m1s   v1.20.1
[node1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
service/my-nginx-svc created
deployment.apps/my-nginx created
[node1 ~]$ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   12m     v1.20.1
node2   Ready    <none>                 5m28s   v1.20.1
[node1 ~]$ kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-66b6c48dd5-kwx58   1/1     Running   0          34m
my-nginx-66b6c48dd5-qw9zf   1/1     Running   0          34m
my-nginx-66b6c48dd5-r7n5p   1/1     Running   0          34m
The Play with Kubernetes environment can become very slow even mid-session, and pods can sit in Pending for quite a while. So take a leisurely bathroom break and run kubectl get pod again; things may well be running by then. My personal cut-off for declaring it hopeless is roughly two bathroom breaks with no progress.
Play with Kubernetes can also be so slow that clicking ADD NEW INSTANCE is simply ignored. When that happens, CLOSE the session, wait a while, and then log in again.
The digital elevation model is what is usually called a DEM (Digital Elevation Model): the ground elevation (in metres) of Japan's national territory, aggregated into 5 m / 10 m mesh cells and downloadable as XML. The DEM XML is GML-based, but the elevation values are packed serially inside a single element, and the data cannot be interpreted correctly without consulting other elements and attribute values, so you have to read the specification carefully and implement the parsing yourself.
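Before writing a full parser, it helps to peek at the raw values from the command line. A minimal sketch with xmllint, assuming the element layout I recall from these files (the elevations sit in a gml:tupleList as "surface-type,elevation" pairs) and a hypothetical file name dem.xml; verify against the actual JPGIS (GML) specification:

xmllint --xpath "//*[local-name()='tupleList']/text()" dem.xml | head
# expect lines like: 地表面,25.31   (surface type, elevation in metres)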
# Read record_secs seconds of audio one chunk at a time and write each
# chunk straight to the WAV file (44100 is the sampling rate the stream
# was opened with; chunk is the frames-per-buffer size, 4096 here).
max_count = int((44100 / chunk) * record_secs)
for i in range(max_count):
    # exception_on_overflow=False guards against IOError on input overflow
    data = stream.read(chunk, exception_on_overflow=False)
    wavefile.writeframes(data)
    if i != 0 and i % 100 == 0 and debug:
        print(f'wrote {i}/{max_count} frame(s).')