[Feature] GPU passthrough support
Reasoning
I haven't looked extensively into how you handle the Windows (VM?) setup, but I saw that you're using dockur/windows. As far as I can tell, they support GPU passthrough by enabling access to /dev/dri for the container that manages the Windows VM.
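For reference, a rough sketch of how those device nodes could be forwarded to the container (the helper name is mine; a dockur/windows-style setup also needs /dev/kvm and the rest of its usual configuration):

```shell
# Hypothetical helper (name is mine): emit a --device flag for every DRM node
# on the host, so the container running the Windows VM can open them.
build_docker_gpu_args() {
    dri_dir="${1:-/dev/dri}"
    for node in "$dri_dir"/*; do
        [ -e "$node" ] && printf -- '--device=%s ' "$node"
    done
}

# Usage (sketch): docker run $(build_docker_gpu_args) ... dockur/windows
```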
I think this could be enough for hardware acceleration in something like Teams, but haven't been able to test this yet.
However, SR-IOV is available on Intel iGPUs (very common in laptops in a business setting, where Teams may be necessary). While this is unfortunately only true for some generations, there should be enough dual-GPU (iGPU and dGPU) laptops to justify this. I know I would be more than willing to add another old Nvidia card that sips power to my desktop PC for this use case as well.
I'm not aware of how far AMD's virtio support has come, so please excuse my ignorance there. I will update this issue once I've been able to give it a try.
Request
That being said, it would be great to have WinBoat set up GPU passthrough. Both GVT-g and SR-IOV allow you to essentially split your existing GPU and pass a slice through to a VM without losing access to it on the host. It's probably the closest we can come to the Docker way.
I still think it should be up to the user to set up virtualized GPU splitting, but being able to utilize these virtual GPUs and pass them through for hardware-accelerated video encode/decode would be great. It's obviously not impossible to do, so having WinBoat automatically recognize this, and perhaps even suggest using a virtualized GPU for certain applications (like Teams, Photoshop, etc.), might make sense.
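As a sketch of that "automatically recognize" idea (the function name and parameterized path are mine; a real check would walk /sys/bus/pci/devices): a device whose sysfs entry advertises `sriov_totalvfs > 0` can be split into virtual functions that WinBoat could offer to the VM.

```shell
# Sketch: does this PCI device support SR-IOV virtual functions?
# dev_path would normally be e.g. /sys/bus/pci/devices/0000:00:02.0
has_sriov_vgpu() {
    dev_path="$1"
    # no sriov_totalvfs attribute means no SR-IOV capability
    [ -r "$dev_path/sriov_totalvfs" ] || return 1
    [ "$(cat "$dev_path/sriov_totalvfs")" -gt 0 ]
}
```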
Do I remember this wrong, or can an additional GPU that is only used for passthrough VMs actually be completely shut off, and not even "sip power"? At least I thought I read that almost a decade ago, when I had a box with 4 GPUs and a VFIO setup.
I managed to achieve passthrough of my secondary Nvidia GPU (my main one is AMD) with the help of this comment on the thread you mentioned. It needed a lot of workarounds, especially on a motherboard that does not put the GPU's PCIe port in a separate IOMMU group; mine didn't, so I had to use the Zen kernel with the ACS override patch to create pseudo-groups and then pass the GPU through. These are the general steps:
- Enable Zen Kernel
- Enable ACS override patch
- Check the IOMMU groups and select the one containing the GPU you want to pass through
- Make sure you detach all the devices in that group from i915 and attach them to vfio-pci. This makes the secondary GPU, along with every other device in that group, unusable on the host (this is why the patch is needed on motherboards that place other devices the host needs in the same group)
- Pass everything that's in that IOMMU group to the guest via docker (see comment mentioned above)
- Install the GPU driver in Windows (sometimes this happens automatically)
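The detach-and-attach step above can be sketched with the kernel's `driver_override` mechanism (the function name is mine, and the sysfs root is parameterized only so the sketch can be exercised without root; on a real system it writes under /sys/bus/pci):

```shell
# Rebind one PCI device (BDF like 0000:01:00.0) from its current driver to
# vfio-pci. Sketch only; the real thing needs root and the vfio-pci module.
bind_to_vfio() {
    bdf="$1"
    sys="${2:-/sys/bus/pci}"
    # detach from whatever driver currently owns it (i915 in the steps above)
    if [ -e "$sys/devices/$bdf/driver/unbind" ]; then
        printf '%s' "$bdf" > "$sys/devices/$bdf/driver/unbind"
    fi
    # tell the PCI core to hand this device to vfio-pci on the next probe
    printf 'vfio-pci' > "$sys/devices/$bdf/driver_override"
    printf '%s' "$bdf" > "$sys/drivers_probe"
}
```

Run once per device in the IOMMU group before starting the guest.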
This will work fine, but there are some security risks with the ACS override patch that I've read about, see here. I had trouble making sure the device stayed attached to vfio-pci at first, but I think that was my inexperience. I also don't think you have to keep the device attached to vfio-pci permanently: I saw somewhere that someone created a script that attaches and detaches the device automatically. That can partially solve the issue of using the device on both host and guest, just not at the same time.
I do think most of this process can be automated with scripts but you have to keep in mind:
- This is a very rare use case IMO, with too many steps and too many workarounds; it may not be the best use of dev time, especially considering that there are other solutions that may be more viable and easier for GPU usage in Windows-in-Docker (see this issue in the virtio-win repo)
- Switching kernels, enabling the patch, and enabling virtualization in the BIOS all require some experience and manual intervention
Probably the best GPU passthrough support will come once this PR is finalized and matured: https://github.com/virtio-win/kvm-guest-drivers-windows/pull/943
Especially this note from the PR:

> Enabling SR-IOV support PF mode by default on supported platforms without needing to have built the Linux kernel with the CONFIG_DRM_XE_DEBUG option. This also includes enabling SR-IOV on the Xe driver when opting to use this modern driver with Tigerlake, Alder Lake, and Arctic Sound graphics rather than using the default i915 driver
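For context, opting into the Xe driver on those platforms is currently done with kernel parameters. A hedged example of what that could look like (the PCI device ID 9a49 is only illustrative, a Tiger Lake iGPU; take yours from `lspci -nn`, and regenerate your bootloader config the way your distro expects):

```shell
# /etc/default/grub (sketch): take the iGPU away from the default i915 driver
# and let the experimental Xe driver probe it instead.
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.force_probe=!9a49 xe.force_probe=9a49"
# then (Debian-style): sudo update-grub && reboot
```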
So on Intel 11th- and 12th-gen CPUs, the iGPU is capable of exposing about 8 virtual functions that act just like separate PCI devices (it supports up to 8, if I remember correctly), so all of your VMs can get acceleration. I don't know if Looking Glass can display more than one VM at the same time, though; that would be a Looking Glass limitation.
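Creating the virtual functions themselves is, in sketch form, a single sysfs write (the function name is mine; on a real system this needs root and the physical function usually sits at /sys/bus/pci/devices/0000:00:02.0):

```shell
# Ask an SR-IOV capable iGPU for up to N virtual functions, clamped to what
# the hardware advertises (up to 8 on the Intel generations discussed above).
enable_igpu_vfs() {
    dev="$1"            # sysfs path of the physical function
    want="${2:-8}"
    total=$(cat "$dev/sriov_totalvfs")   # hardware limit
    if [ "$want" -gt "$total" ]; then
        want="$total"
    fi
    printf '%s' "$want" > "$dev/sriov_numvfs"
    cat "$dev/sriov_numvfs"              # report how many VFs were created
}
```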
> Probably the best GPU passthrough support will be once this PR is finalized and matured. virtio-win/kvm-guest-drivers-windows#943
Considering that this doesn't even implement Vulkan/DX11-12, I think proper passthrough support is still a good idea.
Some hypervisors, like VirtualBox 7.0, use the host machine's DXVK-Native to provide guest Windows machines with DirectX 8/9/10/11/12 support.
GPU passthrough is a good idea on paper, but not as good in practice. Not everyone has two GPUs in their PC; I have only one GPU in a mini-ATX PC, so passthrough is impossible without a proper workaround. GPU acceleration is a need, sure, but you shouldn't always blame WinBoat. FreeRDP has never been better either. Also, did you know you can use GNOME Connections with your Windows instance? Well, I did, and it suddenly worked by accident when I tried to connect to my other PC over the LAN.
Anyway, we need GPU acceleration, but passthrough isn't always the solution. If you look at WSL or WSA, you may notice that Microsoft was able to provide hardware acceleration by using a bridge between the VM and the host OS. Maybe it's a good time to look at how they did it and take inspiration.
Unreal Engine requires a GPU to run at all, so it is impossible to create a Windows game build via the Linux-native version.
I hope that if GPU passthrough is added, it will be dynamic VFIO/GPU passthrough, meaning you don't need to disable the GPU on your host system (disable it or blacklist it, then restart) for it to work in a VM. For example, this is a script I used as a libvirt hook to dynamically detach the GPU from my running system:
```sh
#!/bin/sh
# libvirt qemu hook: invoked with $1 = guest name, $2 = operation
# ("prepare", "started", "release", ...)
OPERATION="$2"

if [ "$OPERATION" = "prepare" ]; then
    #systemctl stop sddm.service
    systemctl stop display-manager.service
    # systemctl set-environment KWIN_DRM_DEVICES=${coreutils}
    echo true > /tmp/kwin_drm_devices_flag
    # unload the Nvidia kernel modules so nothing on the host holds the GPU
    modprobe -r -a nvidia_uvm nvidia_drm nvidia nvidia_modeset
    # detach the GPU and its HDMI audio function from the host
    virsh nodedev-detach pci_0000_01_00_0
    virsh nodedev-detach pci_0000_01_00_1
    systemctl start display-manager.service
    virsh net-start default
fi

if [ "$OPERATION" = "release" ]; then
    systemctl stop display-manager.service
    # give the GPU back to the host
    virsh nodedev-reattach pci_0000_01_00_0
    virsh nodedev-reattach pci_0000_01_00_1
    modprobe -a nvidia_uvm nvidia_drm nvidia nvidia_modeset
    # systemctl start sddm.service
    systemctl start display-manager.service
fi
```
This will restart your display manager, i.e. any ongoing desktop sessions, but there might be a way around that. The only thing that needs to happen is to ensure no process is using your GPU; on KDE, for example, you would need to set KWIN_DRM_DEVICES. A way to do this dynamically, plus a helper to see which processes are using your GPU so you don't need to restart your computer, would be very useful to a lot of people once WinBoat adds support.
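That "which processes are using the GPU" helper could be sketched with nothing but procfs (the function name is mine; it only needs a POSIX shell and a mounted /proc):

```shell
# List PIDs holding an open file descriptor on the given device node(s),
# e.g. gpu_users /dev/dri/renderD128 /dev/nvidia0
gpu_users() {
    for node in "$@"; do
        for fd in /proc/[0-9]*/fd/*; do
            # fds of other users' processes are unreadable; ignore them
            if [ "$(readlink "$fd" 2>/dev/null)" = "$node" ]; then
                pid=${fd#/proc/}
                echo "${pid%%/*}"
            fi
        done
    done | sort -un
}
```

Such a helper could run before the hook's "prepare" phase and refuse to detach the GPU while anything still holds it open.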
@shawny43 isn't the wsl bridge the equivalent of passthrough?