
A discussion on the topic of multi-channel access

Open yanbx opened this issue 4 years ago • 1 comments

Hello, following on from this topic: we used Docker's shared local resources to display a desktop in the browser. My question is about a server fitted with multiple video cards. With our existing method, no matter how many users there are, they all share one graphics card while the other cards sit idle, because the Docker remote desktop shares the physical host's Xorg. So if I want to achieve this kind of multi-channel, multi-card setup, is continuing to use Docker not feasible? Would OpenStack or VMware be a better implementation?

yanbx avatar Feb 27 '21 05:02 yanbx

Again I'm afraid that the translation software is making your questions difficult to understand, it might be better to ask one thing at a time. I'll try to answer what I think you are asking, but I might have got it wrong.

Are you asking about what happens if your server has multiple graphics cards?

> with our existing method, no matter how many users there are, they all share one graphics card

As it happens, one of the advantages of using containers over VMs is that you maximise use of your resources by allowing all users to share them. So yes, if you have multiple virtual desktops running on a server then they will all share the same CPU and the GPU acting as the 3D X server (the one bind-mounted as display :0).

With virtual machines you partition resources by allocating virtual CPUs and memory to each VM, and that allocation is made irrespective of whether they are actually being used. GPUs tend to be hard to share in VMs; with very modern hardware I think they are starting to use SR-IOV https://www.networkworld.com/article/3535850/what-is-sr-iov-and-why-is-it-the-gold-standard-for-gpu-sharing.html, but I'm not too familiar with that, it's quite new, and it's only supported by a few cards. Mostly with VMs you'd use PCI passthrough, but in that situation you'd allocate a GPU per VM.

> this is due to the Docker remote desktop

Actually it has absolutely nothing to do with the Docker remote desktop; it is everything to do with how resources on the host are configured.

> it uses the physical host's shared Xorg, so if I want to achieve this kind of multi-channel, multi-card setup, is continuing to use Docker not feasible?

If what you are asking is whether it is possible to utilise the multiple graphics cards that might exist on your server, then it is perfectly feasible to continue with the Docker approach, but you are going to have to do more work whichever approach you take.

I think that you have two general options:

  1. If you have multiple graphics cards I believe it is possible to run multiple X servers: you basically configure the second X server to launch on a different TTY. I've never tried this, but I came across this link https://www.linuxquestions.org/questions/linux-desktop-74/multiple-x-servers-multiple-graphics-adapters-single-seat-kind-of-tutorial-864646/ that might help, and if you search for "multiple graphics cards multiple X servers" you might find more advice. If you were able to stand up multiple 3D X servers on your host then you'd need to modify ubuntu.sh. At the moment it is hard-coded to pass in display :0 as the 3D X server, but if you have multiple 3D X servers you might want to make that configurable so that some instances use one server and others another.

Another link is https://groups.google.com/g/virtualgl-users/c/5B331QalCaI/m/KALUF_64AgAJ; note that a comment in that thread says "In order to use VirtualGL with multiple GPUs, generally the easiest way to do it is to configure a single X server with multiple screens, so GPU 0 would be accessible by setting VGL_DISPLAY=:0.0 and GPU 1 would be accessible by setting VGL_DISPLAY=:0.1, etc."

I've also found this link which might be useful https://sourceforge.net/p/virtualgl/mailman/virtualgl-users/thread/[email protected]/ but to be clear it is not a topic that I am familiar with.
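To make the "configurable 3D display" idea concrete, here is a minimal sketch; the variable names are assumptions for illustration and are not taken from the actual ubuntu.sh:

```shell
#!/bin/sh
# Hypothetical sketch: read the 3D X server display from an environment
# variable instead of hard-coding :0, so different container instances
# can target different X servers (and therefore different GPUs).
#
# A second X server on the second card might be started with something
# like (assuming an xorg-gpu1.conf that pins BusID to that card):
#   Xorg :1 -config xorg-gpu1.conf vt8 &

X3D_DISPLAY="${X3D_DISPLAY:-:0}"                # default to :0 when unset
X3D_SOCKET="/tmp/.X11-unix/X${X3D_DISPLAY#:}"   # e.g. :1 -> /tmp/.X11-unix/X1

echo "3D X server display: ${X3D_DISPLAY}"
echo "X11 socket to bind mount: ${X3D_SOCKET}"
```

Launching with `X3D_DISPLAY=:1` would then select the second X server; with the single-X-server-multiple-screens approach from the VirtualGL thread above you'd instead leave one X server running and set `VGL_DISPLAY=:0.1` to select GPU 1.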

  2. You could run a VM per card, use PCI passthrough, and share the CPU and memory resources evenly between them. That way you'd have a VM per card (as opposed to per user) and run however many desktops on each VM are needed to fully utilise its resources.
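As a rough illustration of the passthrough route, the following sketch hands a whole GPU to one VM with QEMU/KVM. The PCI address (01:00.0), memory size, CPU count, and disk image name are all placeholders, not a tested configuration:

```shell
# Hypothetical sketch of GPU PCI passthrough with QEMU/KVM.
# On the host, bind the GPU to the vfio-pci driver first, e.g.:
#   modprobe vfio-pci
#   echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind

# One VM per card: each VM gets a whole GPU plus an even share of the
# host's CPU and memory, and runs as many desktops as will fit.
qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -smp 4 \
  -device vfio-pci,host=01:00.0 \
  -drive file=desktop-vm.qcow2,if=virtio
```

In practice you'd more likely drive this through libvirt, but the key point is the same: the `vfio-pci` device dedicates the card to that one VM, so sharing only happens among the desktops inside it.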

I'd personally try the first approach as you are most likely to be able to fully utilise CPU and memory.

If you have a situation where every user is essentially using 100% of the GPU then the benefits of using a shared GPU are obviously reduced, but your options will depend on your requirements.

This is all somewhat out of scope for this repo though as it's mostly about how the host server is configured.

fadams avatar Feb 27 '21 09:02 fadams