Harry Tsang
Although this is faster, one major bottleneck is still in VideoDataset. When inferring on a 4k HEVC video, around 80% of the execution time is spent on VideoDataset decode. Future...
Hi @goodnessshare I managed to improve the `inference_video.py` script's performance by over 3 times in my fork, making real-time HD video inference with encoding somewhat viable, but that is still...
My pull request is from [my public fork](https://github.com/h9419/BackgroundMattingV2) of this repository. I used the not-very-scientific methodology of running the script without inference and found that at least 80%...
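To illustrate the kind of measurement I mean, here is a rough sketch (the helper names are hypothetical, and the real `inference_video.py` loop also passes the background frame to the model):

```python
import time
import torch

def time_decode_only(loader):
    """Iterate the VideoDataset/DataLoader without running the model."""
    start = time.time()
    for src in loader:                     # frame decoding happens here
        src = src.cuda(non_blocking=True)
    torch.cuda.synchronize()
    return time.time() - start

def time_decode_and_infer(loader, model):
    """Same loop, but with matting inference included."""
    start = time.time()
    with torch.no_grad():
        for src in loader:
            src = src.cuda(non_blocking=True)
            _ = model(src)                 # simplified: the real model also takes the background frame
    torch.cuda.synchronize()
    return time.time() - start

# If time_decode_only(loader) already accounts for ~80% of
# time_decode_and_infer(loader, model), decoding rather than the
# network is the bottleneck.
```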
@lzn1273180880 I think I have found my hardware's limit for video inference using Nvidia's [VideoProcessingFramework](https://github.com/NVIDIA/VideoProcessingFramework). Video decoding can be done on the GPU and the output is converted into...
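For context, a minimal sketch of that decode path, based on the VPF Python samples (the filename and the exact converter chain are assumptions, and signatures differ slightly between VPF versions):

```python
import PyNvCodec as nvc          # VPF Python bindings (NVDEC/NVENC)
import PytorchNvCodec as pnvc    # VPF helper that wraps GPU surfaces as torch tensors

gpu_id = 0
nv_dec = nvc.PyNvDecoder("input.mp4", gpu_id)   # NVDEC decoder; frames stay in GPU memory
w, h = nv_dec.Width(), nv_dec.Height()

# Convert the decoder's NV12 output to planar RGB, still on the GPU
to_rgb = nvc.PySurfaceConverter(w, h, nvc.PixelFormat.NV12, nvc.PixelFormat.RGB, gpu_id)
to_planar = nvc.PySurfaceConverter(w, h, nvc.PixelFormat.RGB, nvc.PixelFormat.RGB_PLANAR, gpu_id)
cc = nvc.ColorspaceConversionContext(nvc.ColorSpace.BT_601, nvc.ColorRange.MPEG)

while True:
    surface = nv_dec.DecodeSingleSurface()
    if surface.Empty():
        break
    rgb = to_planar.Execute(to_rgb.Execute(surface, cc), cc)
    plane = rgb.PlanePtr()
    # Wrap the GPU buffer as a uint8 torch tensor without a round trip through host memory
    frame = pnvc.makefromDevicePtrUint8(plane.GpuMem(), plane.Width(), plane.Height(),
                                        plane.Pitch(), plane.ElemSize())
    frame = frame.resize_(3, h, w).unsqueeze(0).float().div_(255)  # assumes pitch == width (no row padding)
```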
@lzn1273180880 I have a few small but significant breakthroughs. 1. Using both NVDEC and NVENC in VPF makes video encoding happen without any CPU memory involvement for the raw tensors (see the sketch below). 2....
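A stripped-down sketch of the NVENC side from point 1 (the surface source and output filename are placeholders; the VPF encoder expects NV12 input, so the composited RGB frames have to be converted back first):

```python
import numpy as np
import PyNvCodec as nvc

gpu_id = 0
# NVENC encoder; hypothetical settings, adjust codec/preset/resolution to the source video
nv_enc = nvc.PyNvEncoder({"codec": "h264", "preset": "hq", "s": "1920x1080"}, gpu_id)

packet = np.ndarray(shape=(0,), dtype=np.uint8)       # reused buffer for the compressed bitstream
with open("output.h264", "wb") as out:
    for nv12_surface in composited_nv12_surfaces():   # placeholder: GPU surfaces already converted to NV12
        # Raw frames never touch host memory; only the encoded packet is copied back
        if nv_enc.EncodeSingleSurface(nv12_surface, packet):
            out.write(bytearray(packet))
    # NVENC buffers a few frames internally; the VPF samples flush these at end of stream
```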
> Although this is faster, one major bottleneck is still in VideoDataset. When inferring on a 4k HEVC video, around 80% of the execution time is spent on VideoDataset decode....
@MichaIng The same issue persists on my NanoPi M1 Plus hardware. Armbian and DietPi images from before 2021 seemed to work. In 2021 I thought my hardware was faulty when I...
Update: the vendor image boots, but only if I use a smaller SD card. My default 128 GB SD card does not boot with any of the vendor's bootable images but works fine...
It works on the iOS simulator, but not on Android once deployed. The internet permission is missing in `android/app/src/main/AndroidManifest.xml`. Just add `android.permission.INTERNET` and set `android:usesCleartextTraffic="true"`.
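For reference, the two additions sit roughly like this in the manifest (the package name is a placeholder, and `usesCleartextTraffic` is only needed if the endpoint is plain `http://`):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">

    <!-- lets the app open network sockets -->
    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:usesCleartextTraffic="true">
        <!-- existing activities etc. stay as they are -->
    </application>
</manifest>
```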
> We do plan porting to other platforms soon. We just want to first have one very solid implementation and CUDA happened to be the best/easiest to do it first....