Feature: Add SSH server to node image
Since Usernetes can run on multiple hosts, I would appreciate being able to do some automation via SSH (e.g. with Ansible), as you would on a "classic" Kubernetes node. Upgrading the control plane would be one such automation task. Also, SSH-ing directly into the container, instead of first SSH-ing into the host and then running `nerdctl exec`, can save some time.
These additions to usernetes would be required:
- Append `openssh-server` to the `apt-get install` command in the Dockerfile
- Add a volume `node-ssh` and mount it to `/root/.ssh` in docker-compose.yaml to be able to persist `authorized_keys`
- Add port forwarding, e.g. 2222:22, in docker-compose.yaml
- Allow public-key root login with `echo "PermitRootLogin prohibit-password" >> /etc/ssh/sshd_config`
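Taken together, the changes above might look like the following sketch. The package list, service name (`node`), and volume layout are assumptions for illustration; the actual Usernetes Dockerfile and compose file will differ.

```dockerfile
# Dockerfile (fragment, sketch): add sshd to the node image
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-server && \
    # permit root login, but only with public keys (no passwords)
    echo "PermitRootLogin prohibit-password" >> /etc/ssh/sshd_config
```

```yaml
# docker-compose.yaml (fragment, sketch): expose sshd and persist keys
services:
  node:                      # hypothetical service name
    ports:
      - "2222:22"            # host port 2222 -> container sshd
    volumes:
      - node-ssh:/root/.ssh  # keeps authorized_keys across recreations
volumes:
  node-ssh: {}
```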
Of course, for those who have security concerns or simply don't need it, we could disable the SSH server by default by removing the following service files in the Dockerfile: `rm /etc/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/ssh.service`
In this case we could add another make target, e.g. `make enable-ssh`, to enable SSH at runtime.
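Such a target could be sketched roughly like this (hypothetical; the compose invocation and service name `node` are placeholders, not existing Usernetes Makefile conventions). Since only the enablement symlinks were removed, the Debian/Ubuntu unit file under `/lib/systemd/system` is still present and `systemctl enable` can recreate them:

```makefile
# Makefile (sketch): opt in to SSH at runtime
enable-ssh:
	docker compose exec node systemctl enable --now ssh
```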
I can create a PR if this feature is accepted.
I'd rather suggest setting up the SSH server on the host and using `(docker|podman|nerdctl) exec` to run your script inside the node container.
Sure you can, but then it doesn't behave like a real Kubernetes node, and you can't do things like node orchestration with your "standard" Ansible playbooks. I believe the more a Usernetes node behaves like a standard Kubernetes node, the more attractive adoption becomes. Otherwise it will be hard to drop a classic setup (in my case, actually separated into different VMs per node) in favor of Usernetes.
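To illustrate the point: with port 2222 forwarded to the container's sshd, a node could be targeted by an ordinary Ansible inventory entry, exactly as a VM-based node would be (hostnames here are hypothetical):

```ini
# inventory.ini (sketch): a Usernetes node addressed like any other host
[usernetes_nodes]
node-1 ansible_host=host1.example.com ansible_port=2222 ansible_user=root
```

Existing playbooks that assume SSH access to nodes would then work without rewriting them around `nerdctl exec`.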