Post your video card FPS here!
It's interesting to see what kind of frame rate (FPS = Frames Per Second) we can expect from our video cards. So if your card is not already listed here, or if your software versions or OS are very different from what's already listed, please post a comment with the relevant table info, and I'll add it here.
Expected Video Card FPS table
| Video Card Name | VRAM* | FPS | OS | Py | PyTorch | Driver** |
|---|---|---|---|---|---|---|
| Nvidia GeForce RTX 3070 | 8 GB | 33 | Win10 Pro | 3.8.5 | 1.7.1 | |
| Nvidia GeForce RTX 2080 | | 21 | | | | |
| Nvidia GeForce RTX 2070 | 8 GB | 23 | Win10 | 3.7.6 | 1.0.0 | 441.22 |
| Nvidia GeForce RTX 2060 | 8 GB | 19 | Win10 | 3.7.5 | 1.0.0 | |
| Nvidia GeForce RTX 2060 | 6 GB | 15 | Win10 | 3.7.5 | 1.0.0 | |
| Nvidia GeForce GTX 1650 | 4 GB | 11 | Win10 | 3.7.x | 1.0.0 | |
| Nvidia GeForce GTX 1650M | 4 GB | 18 | Arch Linux | 3.7.7 | 1.0.0 | |
| Nvidia GeForce GTX 1650 Ti | 8 GB | 21 | Linux Mint 20 | 3.7.9 | 1.7.1 | 450.102 |
| Nvidia GeForce GTX 1660 SUPER | 6 GB | 25 | Ubuntu 20.04 | 3.7.9 | 1.0.0 | |
| Nvidia GeForce GTX 1070 | 8 GB | 15 | | | | |
| Nvidia GeForce GTX 1070 | 8 GB | 28 | Manjaro 20.0.1 | 3.7.7 | 1.0.0 | |
| Nvidia GeForce GTX 1060 | 6 GB | | | | | |
| Nvidia GeForce GTX 1050 Ti | 4 GB | 7 | Win10 | 3.7.x | 1.0.0 | |
| Nvidia GeForce GTX 950 | 2 GB | 9 | Win10 | 3.7.7 | 1.0.0 | |
| Nvidia GeForce GTX 860M | 4 GB | 5+ | Ubuntu 20.04 | 3.7.7 | 1.0.0 | |
| Nvidia GeForce GTX 850M | 4 GB | 3+ | Win8.1 | 3.8 | 1.5.0 | |
| Nvidia GeForce GTX 850M | 4 GB | 5+ | Win8.1 | 3.7.7 | 1.0.0 | |
| Nvidia GeForce GTX TITAN | 4 GB | 7+ | Win10 | 3.7.7 | 1.0.0 | |
| Nvidia Jetson TX2 | | 25 | | | | |
*VRAM is your card's video RAM in GB. **Driver is your Nvidia driver version.
To get your video card info:

```
# For Windows PowerShell, use:
(Get-CimInstance -ClassName CIM_VideoController) | Select-Object Name, @{Name="VRAM"; Expression={[math]::round($_.AdapterRAM/1GB, 0)}}, DeviceID, AdapterDACType
# Note: AdapterRAM is a 32-bit value, so cards with more than 4 GB may be under-reported.

# For Windows CMD, use:
wmic path win32_VideoController get name

# For Windows (drop the ".exe" on *nix) with the Nvidia tools in the PATH, use:
nvidia-smi.exe -q
nvidia-smi.exe --format=csv --query-gpu=name,memory.total,pstate,count,driver_version

# For Linux (general), use:
lspci -v
sudo lshw -numeric -C display

# This requires the "mesa-utils" package, which may not be available on headless cloud installs:
glxinfo
```
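If you prefer a single command, here's a minimal Python sketch (assuming `nvidia-smi` is on your PATH) that pulls the relevant table fields in one go, using the same query flags as above:

```python
# Minimal sketch: gather the report fields via nvidia-smi in one call.
# Assumes nvidia-smi is on the PATH; the query flags match the commands above.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--format=csv",
     "--query-gpu=name,memory.total,driver_version"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
# e.g.:
# name, memory.total [MiB], driver_version
# GeForce RTX 2070, 8192 MiB, 441.22
```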
You can also search this website for your exact video card, although specs can vary between hardware revisions, which are not always easy to identify.
To get your active Python and PyTorch versions, make sure to activate the conda environment first, as the Python version may differ between environments.
```
conda activate avatarify
python -V
pip show torch
```
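You can also query both from inside the activated environment; a minimal sketch (assuming PyTorch is installed there):

```python
# Minimal sketch: print the version info the table asks for.
import sys
import torch

print("Python: ", sys.version.split()[0])
print("PyTorch:", torch.__version__)
if torch.cuda.is_available():
    print("GPU:    ", torch.cuda.get_device_name(0))
```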
Video Card Name (Nvidia GeForce RTX 2060) VRAM (8 GB) FPS (19) OS (Win10 Home) Python (3.7.5) PyTorch (1.0.0)
@E3V3A it's impossible for a 1060 to run 40+ FPS; that was most probably a "40 ms" model inference time. I also doubt a 1070 is able to perform at 30 FPS, I have 15 on mine.
@alievk
> it's impossible
Good to know. I got all those numbers from random posts in the Slack channels...
Video Card Name (Nvidia GeForce RTX 2060) VRAM (6 GB) FPS (15.8) OS (Win10 Pro) Python (3.7.5) PyTorch (1.0.0)
Video Card Name (Nvidia GeForce GTX 1070) VRAM (8 GB) FPS/Model/Pre/Post (28/27.5/1.5/5.0) (seems too high for a 1070) OS (Manjaro 20.0.1) Python (3.7.7) PyTorch (1.0.0)
Video Card Name: Nvidia GeForce GTX TITAN VRAM: 4 GB FPS: 7.8 OS: Windows 10 Python: 3.7.7 PyTorch: 1.0.0
Video Card Name: Nvidia GeForce GTX 860M FPS: 5.8 OS: Ubuntu Budgie 20.04 Python: 3.7.7 PyTorch: 1.0.0
Video Card Name: None (Processor: i7-5557U) FPS: 0.3 OS: macOS Catalina 10.15.4 Python: 3.7.7 PyTorch: 1.5.0
Video Card Name: Nvidia GTX 950 VRAM: 2 GB FPS: 9.1 OS: Windows 10 Python: 3.7.7 PyTorch: 1.0.0
I'm not sure how much of a difference the video card makes. You can see just in these comments that it varies wildly even amongst similar cards, and while my GTX 950 got ~9 FPS, I noticed it was only under 20% load. Something doesn't seem to be optimized.
@clippycoder
> my GTX 950 got ~9 FPS
Actually that sounds like a lot. For a video card with only 2 GB, that is awesome!
Can you please check again and post the `nvidia-smi -q` output?
Don't worry about the load. Win10 does a super confusing job of showing GPU usage in the Task Manager. I believe the 20% you see is an artifact of the Task Manager averaging over a number of CUDA cores or processing units. To see true usage, run the following command:
```
nvidia-smi --query-gpu=pci.bus_id,pstate,temperature.gpu,utilization.gpu,utilization.memory,memory.free,memory.used --format=csv -l 5
```
This should keep updating while you're running.
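If you'd rather watch it programmatically, here's a minimal Python sketch (again assuming `nvidia-smi` is on the PATH) that polls the same query and prints just the utilization and memory columns:

```python
# Minimal sketch: poll the same nvidia-smi query every 5 s and print
# the utilization and memory-used columns (field order follows QUERY).
import subprocess
import time

QUERY = ("pci.bus_id,pstate,temperature.gpu,utilization.gpu,"
         "utilization.memory,memory.free,memory.used")

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    fields = [f.strip() for f in out.splitlines()[0].split(",")]
    print(f"GPU util: {fields[3]}  mem util: {fields[4]}  mem used: {fields[6]}")
    time.sleep(5)
```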
@E3V3A I'm quite sure it's a GTX 950, having purchased it 'bout a week ago. But, I double checked for you, and yes, it's still a GTX 950, still 2GB, with about 9 FPS. I'd say I'm pretty satisfied. I wouldn't call the output smooth by any measure, but still acceptable for videoconferencing. And thanks for the tip about GPU usage, from the command prompt, it actually appears to be about 70-80%; much more realistic. Task manager reads 15%.
Video Card Name: GeForce GTX 1650 Mobile VRAM: 4 GB FPS: 18 OS: Arch Linux Python: 3.7.7 PyTorch: 1.0.0
Video Card Name: GeForce RTX 2070 (remote GPU in my local network) VRAM: 8 GB FPS: 19 without CUDA, 23 with CUDA OS: Win10 Python: 3.7.6 PyTorch: 1.0.0
Did the CUDA installer update the driver? Installing CUDA alone is not supposed to affect performance...
@E3V3A I think mentioning the NVIDIA driver could be informative in these reports.
@alievk I believe the CUDA installer did update the driver, and agreed, the Nvidia driver version may be a performance factor.
@alievk When you say:

> NVIDIA driver

Do you mean the version or some other part?
@QingpingMeng
> Video Card Name: GeForce RTX 2070 (remote GPU in my local network) FPS: 19 without CUDA, 23 with CUDA

This doesn't quite make sense; please post the output of the commands requested above, especially `nvidia-smi`.
@E3V3A

```
name, memory.total [MiB], pstate, count, driver_version
GeForce RTX 2070, 8192 MiB, P8, 1, 441.22
```
> @alievk When you say "NVIDIA driver", do you mean the version or some other part?

I mean the driver version.
Is there a way to control the working resolution of the video stream, e.g. to gain more FPS with a lower video bitrate?
> Is there a way to control the working resolution of the video stream, e.g. to gain more FPS with a lower video bitrate?

Right now it requires ~22 kbps, which is quite modest. The code in the master branch is not optimal, since it communicates messages in a synchronous fashion.
I have an asynchronous solution in the feat/colab-mode branch, which doesn't drop FPS due to network lag, i.e. if your server is running at 30 FPS then your client will update the image at 30 FPS, just with a delay.
I'll merge the branch to master later, once it's tested and tuned.
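To illustrate the difference (this is just a toy sketch, not the avatarify code): in a synchronous design the client waits one network round trip per frame, while an asynchronous design decouples receiving from displaying, so the client keeps up with the server's FPS at the cost of a fixed delay:

```python
# Toy sketch (not the avatarify code): a receiver thread fills a queue at
# the server's rate while the display loop consumes frames as they arrive,
# so network lag adds delay but does not reduce the displayed FPS.
import queue
import threading
import time

frames = queue.Queue()

def receiver(n_frames=90, server_fps=30):
    """Simulate frames arriving from the server at a fixed rate."""
    for i in range(n_frames):
        time.sleep(1 / server_fps)   # network delivery at the server's FPS
        frames.put(i)

threading.Thread(target=receiver, daemon=True).start()

start = time.time()
for _ in range(90):
    frames.get()                     # never waits for a round trip
elapsed = time.time() - start
print(f"Displayed 90 frames in {elapsed:.1f} s (~{90 / elapsed:.0f} FPS)")
```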
Video Card Name: Nvidia Jetson Nano (128 CUDA cores) VRAM: 4 GB LPDDR4 system total (no dedicated VRAM) FPS: 0.8 OS: Ubuntu 18.04 Python: 3.6.9 PyTorch: 1.5.0
Makes me wonder if that Jetson TX2 entry is correct.
Video Card Name: GeForce RTX 2070 (driver 440.82, CUDA 10.2) VRAM: 8 GB FPS: 34 (32-36) OS: Linux Mint 19.1 Cinnamon (kernel 4.15.0-101-generic) Python: 3.7.7 PyTorch: 1.0.0
With 68% GPU utilization and 22% memory utilization, reading from @E3V3A's suggested command:

```
nvidia-smi --query-gpu=pci.bus_id,pstate,temperature.gpu,utilization.gpu,utilization.memory,memory.free,memory.used --format=csv -l 5
```
Gigabyte RTX 2070 Super VRAM: 8 GB FPS: 32 (but sometimes 20-30) OS: Win10 Python: 3.7.7 PyTorch: 1.0.0
NVIDIA Quadro T1000 (Laptop) VRAM: 4 GB FPS: 10 OS: Win10 Python: 3.7.7 PyTorch: 1.0.0
GeForce GTX 1660 SUPER (1408 CUDA cores) VRAM: 6 GB FPS: ~25 OS: Ubuntu 20.04 Python: 3.7.9 PyTorch: 1.0.0
GPU: GeForce GTX 970 (OC) VRAM: 4 GB (3.5 GB + 0.5 GB) FPS: 20 OS: Ubuntu 20.04 Python (Docker): 3.6.9 PyTorch (Docker): 1.0.0 Python (host): 3.8.3
I'm a bit puzzled as to why I should get better FPS than the 1070... Could the webcam resolution have an effect?
You only got higher FPS than one of the two 1070s, the one that got 15; the other got 28.