
Feature: Add SSH server to node image

mueckinger opened this issue 7 months ago · 2 comments

As usernetes can be run on multiple hosts, I would appreciate being able to do some automation via SSH (e.g. Ansible), as you would on a "classic" Kubernetes node. Upgrading the control plane would be such an automation task. Also, SSH-ing directly into the container, instead of first SSH-ing into the host and then running `nerdctl exec`, can save some time.

These additions to usernetes would be required:

  1. Append `openssh-server` to the `apt-get install` command in the Dockerfile
  2. Add a volume `node-ssh` mounted at `/root/.ssh` in docker-compose.yaml, so that `authorized_keys` can be persisted
  3. Add a port forwarding, e.g. `2222:22`, in docker-compose.yaml
  4. Allow public-key root login with `echo "PermitRootLogin prohibit-password" >> /etc/ssh/sshd_config`
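
Steps 2 and 3 could be sketched roughly as below. This is only an illustration, not the actual usernetes compose file: the service name `node` and the image name are placeholders.

```yaml
# Hypothetical docker-compose.yaml fragment; service and image names are
# assumptions, only the "ports" and "volumes" entries reflect this proposal.
services:
  node:
    image: usernetes-node
    ports:
      - "2222:22"             # expose the container's sshd on host port 2222
    volumes:
      - node-ssh:/root/.ssh   # persist authorized_keys across recreations

volumes:
  node-ssh:
```

With this in place, `ssh -p 2222 root@<host>` would land directly in the node container, which is what tools like Ansible expect of a node.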

Of course, we could disable the SSH server by default, for those who have security concerns or simply don't need it, by removing the following service files in the Dockerfile:

`rm /etc/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/ssh.service`

In that case, we could add another make target to enable SSH at runtime, e.g. `make enable-ssh`.
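
A `make enable-ssh` target might look like the following sketch. It assumes the node container runs systemd (so the unit can be re-enabled from `/lib/systemd/system` after the symlinks above were removed) and uses `docker compose exec` with a placeholder service name `node`; none of this reflects the actual usernetes Makefile.

```makefile
# Hypothetical target; "node" is a placeholder compose service name.
.PHONY: enable-ssh
enable-ssh:
	docker compose exec node systemctl enable --now ssh
```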

I can create a PR if this feature is accepted.

mueckinger avatar Aug 29 '25 08:08 mueckinger

I'd rather suggest setting up the SSH server on the host, and using `(docker|podman|nerdctl) exec` to run your script inside the node container.

AkihiroSuda avatar Sep 06 '25 03:09 AkihiroSuda

Sure, you can, but then it doesn't behave like a real Kubernetes node, and you can't do things like node orchestration with your "standard" Ansible playbooks. I believe that the more a usernetes node behaves like a standard Kubernetes node, the more attractive adoption becomes. Otherwise it will be hard to drop a classic setup (in my case, actually separated into different VMs for each node) in favor of usernetes.

mueckinger avatar Sep 06 '25 09:09 mueckinger