weavenet installation error from setup_weave.sh
Hi,
I am trying to install weavenet on a VM which I want to attach as a worker node to my master node.
Below are the steps I did:
- Installed Docker
- Installed Kubernetes
- Installed WeaveNet on the master node
- Ran kubeadm init on the master node
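For reference, the worker's join command can be regenerated on the master at any point; a minimal sketch, assuming a default kubeadm setup:

```
# On the master: print a fresh "kubeadm join ..." command (with token
# and CA cert hash) that can then be run on the worker
kubeadm token create --print-join-command
```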
Before running kubeadm join, I am installing WeaveNet on my worker VM using the steps below:
- Installed Docker
- Installed Kubernetes
- Installed WeaveNet. Here I am getting the installation error below:
```
[root@host-172-19-104-119 script]# ./setup_weave.sh rsu203
Create /etc/weave.conf
PEERS="masternode"
IP_RANGE="172.30.0.0/16"
Download the WeaveNet software
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
100   629  100   629    0     0    224      0  0:00:02  0:00:02 --:--:--   224
100 51395  100 51395   0     0  10774      0  0:00:04  0:00:04 --:--:-- 37215
Setup weave.service
[Unit]
Description=Weave Network
Documentation=http://docs.weave.works/weave/latest_release/
Requires=docker.service
After=docker.service
Before=kubelet.service

[Service]
EnvironmentFile=/etc/weave.conf
Environment="CHECKPOINT_DISABLE=1"
ExecStartPre=/usr/local/bin/weave reset --force
ExecStartPre=/usr/local/bin/weave launch --no-restart --name fa:16:3e:5f:0d:79 --ipalloc-range=${IP_RANGE} $PEERS
ExecStartPre=/bin/sh -c 'echo KUBELET_EXTRA_ARGS="--node-ip=$$(/usr/local/bin/weave expose)" > /etc/default/kubelet'
ExecStart=/usr/bin/docker attach weave
ExecStop=/usr/local/bin/weave stop

[Install]
WantedBy=multi-user.target
Enable weave.service
Start weave.service
Job for weave.service failed because the control process exited with error code.
See "systemctl status weave.service" and "journalctl -xe" for details.
```
Output of journalctl -xe:
```
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5262] device (weave): state change: ip-config -> ip-check (reason 'none
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5267] device (vxlan-6784): state change: unmanaged -> unavailable (reas
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5274] device (vxlan-6784): enslaved to non-master-type device datapath;
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5275] device (weave): state change: ip-check -> secondaries (reason 'no
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5277] device (weave): state change: secondaries -> activated (reason 'n
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5313] device (weave): Activation: successful, device activated.
Sep 28 11:18:26 host-172-19-104-119 NetworkManager[1100]: <info> [1601306306.5318] device (vxlan-6784): state change: unavailable -> disconnected (r
Sep 28 11:18:26 host-172-19-104-119 nm-dispatcher[4971]: req:2 'up' [weave]: new request (3 scripts)
Sep 28 11:18:26 host-172-19-104-119 nm-dispatcher[4971]: req:2 'up' [weave]: start running ordered scripts...
Sep 28 11:18:26 host-172-19-104-119 dockerd[1395]: time="2020-09-28T11:18:26-04:00" level=info msg="shim reaped" id=54fb30135556c861d73bef2f8ff0a4b0d
Sep 28 11:18:26 host-172-19-104-119 dockerd[1395]: time="2020-09-28T11:18:26.690733486-04:00" level=info msg="ignoring event" module=libcontainerd na
Sep 28 11:18:26 host-172-19-104-119 weave[5087]: The weave container has died. Consult the container logs for further details.
Sep 28 11:18:26 host-172-19-104-119 dockerd[1395]: time="2020-09-28T11:18:26-04:00" level=info msg="shim reaped" id=ddc1677f357fe51de77cc7c885bd713a5
Sep 28 11:18:26 host-172-19-104-119 dockerd[1395]: time="2020-09-28T11:18:26.885422562-04:00" level=info msg="ignoring event" module=libcontainerd na
Sep 28 11:18:26 host-172-19-104-119 systemd[1]: weave.service: control process exited, code=exited status=1
Sep 28 11:18:26 host-172-19-104-119 systemd[1]: Failed to start Weave Network.
```
Any idea what causes this issue?
Check the logs of the weave container with `docker logs -f <container-id>` and see if there is anything wrong.
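For example, a minimal sketch (the "weave" name filter is an assumption; adjust it to whatever `docker ps -a` shows on your node):

```
# List weave-related containers, including ones that already exited
docker ps -a --filter "name=weave"
# Follow the logs of the container found above
docker logs -f <container-id>
```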
Here are the weave container logs:
```
root@vcac-node:~# docker logs -f 363fdcac6036
INFO: 2020/09/30 13:12:45.623058 Command line options: map[H:[unix:///var/run/weave/weave.sock] datapath:datapath dns-listen-address:172.17.0.1:53 docker-bridge:docker0 host-root:/host http-addr:127.0.0.1:6784 ipalloc-range:10.244.0.0/16 name:00:e0:ec:40:d1:29 nickname:vcac-node plugin:true port:6783 proxy:true resolv-conf:/var/run/weave/etc/stub-resolv.conf status-addr:127.0.0.1:6782 weave-bridge:weave]
INFO: 2020/09/30 13:12:45.623304 weave 2.7.0
INFO: 2020/09/30 13:12:45.624853 Docker API on unix:///var/run/docker.sock: &[BuildTime=2018-07-18T19:07:56.000000000+00:00 Platform={"Name":""} Components=[{"Details":{"ApiVersion":"1.38","Arch":"amd64","BuildTime":"2018-07-18T19:07:56.000000000+00:00","Experimental":"false","GitCommit":"0ffa825","GoVersion":"go1.10.3","KernelVersion":"5.3.18-1.b4f0e4e.vca+","MinAPIVersion":"1.12","Os":"linux"},"Name":"Engine","Version":"18.06.0-ce"}] Version=18.06.0-ce MinAPIVersion=1.12 Arch=amd64 KernelVersion=5.3.18-1.b4f0e4e.vca+ ApiVersion=1.38 GitCommit=0ffa825 GoVersion=go1.10.3 Os=linux]
INFO: 2020/09/30 13:12:45.629918 proxy listening on unix:///var/run/weave/weave.sock
INFO: 2020/09/30 13:12:45.635257 failed to create weave-test-commentd171e73e; disabling comment support
WARN: 2020/09/30 13:12:45.640903 Skipping bridge creation of "bridged_fastdp" due to: : bridge not supported
FATA: 2020/09/30 13:12:45.642680 creating dummy interface: operation not supported
```
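The FATA line says the kernel refuses to create dummy interfaces. This can be confirmed outside of weave; a quick sketch (the interface name dummy0 is arbitrary):

```
# If the kernel lacks dummy interface support (CONFIG_DUMMY), the add
# fails with "Operation not supported"
ip link add dummy0 type dummy && ip link del dummy0
```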
Regarding the dummy interface: checking the kernel config on the VCAC node shows `CONFIG_DUMMY: missing`:
```
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: missing
- CONFIG_IP_NF_FILTER: enabled
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled
- CONFIG_NF_NAT_NEEDED: missing
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: missing
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: enabled (cgroup swap accounting is currently enabled)
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: missing
- CONFIG_CFQ_GROUP_IOSCHED: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: missing
- CONFIG_CGROUP_NET_PRIO: missing
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled (as module)
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
Optional (for encrypted networks):
- CONFIG_CRYPTO: enabled
- CONFIG_CRYPTO_AEAD: enabled
- CONFIG_CRYPTO_GCM: enabled
- CONFIG_CRYPTO_SEQIV: enabled
- CONFIG_CRYPTO_GHASH: enabled
- CONFIG_XFRM: enabled
- CONFIG_XFRM_USER: enabled
- CONFIG_XFRM_ALGO: enabled
- CONFIG_INET_ESP: missing
- CONFIG_INET_XFRM_MODE_TRANSPORT: missing
- "ipvlan":
- CONFIG_IPVLAN: missing
- "macvlan":
- CONFIG_MACVLAN: missing
- CONFIG_DUMMY: missing
- "ftp,tftp client in container":
- CONFIG_NF_NAT_FTP: enabled
- CONFIG_NF_CONNTRACK_FTP: enabled
- CONFIG_NF_NAT_TFTP: missing
- CONFIG_NF_CONNTRACK_TFTP: missing
- "overlay":
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: enabled (as module)
- "btrfs":
- CONFIG_BTRFS_FS: enabled (as module)
- CONFIG_BTRFS_FS_POSIX_ACL: missing
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
- "overlay":
- CONFIG_OVERLAY_FS: enabled (as module)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
- "aufs":
Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000
```
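To double-check a single option by hand I used something like the following (a sketch; the config file location varies by distro, so one of the two paths below may not exist):

```
# CONFIG_DUMMY=y or =m means dummy interfaces are supported; absence or
# "is not set" matches the "missing" verdict above
zgrep CONFIG_DUMMY /proc/config.gz 2>/dev/null \
  || grep CONFIG_DUMMY "/boot/config-$(uname -r)"
```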
Any suggestions?
I have forwarded your comments to the VCAC-A dev team. As a workaround, do you have to install WeaveNet at all? If you have a 1-node setup where the Kubernetes master is on your VCAC-A host, you can skip the WeaveNet setup. Alternatively, you can probably use the VCAC-A bridge network mode, which does not require WeaveNet either.
If I skip the WeaveNet setup, the VCAC worker node stays in the NotReady state, and I am not sure I will be able to schedule pods on it. What is the VCAC-A bridge network mode? Can you give more details? Will it allow me to use the VCAC as a worker node in my cluster?
By the way, just to share more findings: I skipped the WeaveNet installation on the VCAC node and instead installed WeaveNet via its YAML manifest with `kubectl apply -f weave-net.yaml`. This now shows two weave pods running in my cluster, but the one running on the VCAC worker node is in the CrashLoopBackOff state.
I am checking the container logs on the VCAC node in /var/log/containers/.
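The logs of the crashed run can also be pulled via kubectl; a sketch (the pod name is a placeholder, take the real one from the first command):

```
# Find the weave pod scheduled on the VCAC worker node
kubectl get pods -n kube-system -o wide | grep weave
# Dump the logs of the previous (crashed) run of its "weave" container
kubectl logs -n kube-system <weave-pod-name> -c weave --previous
```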
Not sure if you can use flannel as an alternative solution.
Example:

```
# Use the raw URL so wget fetches the YAML itself, not the GitHub HTML page
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ sed -i "s/50Mi/100Mi/g" kube-flannel.yml
$ kubectl apply -f kube-flannel.yml
# If the K8s master is on the VCAC-A host:
$ kubectl taint nodes --all node-role.kubernetes.io/master-
```
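Once the CNI is up, the node status should show whether the worker leaves NotReady (assuming kubectl is configured on the master):

```
$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system -o wide
```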