Results 7 comments of 心流

> This takes 3 times longer to process the same video than on the GUI version.
>
> I used the UVR-MDX-NET-Inst_Main model and the same test video (2:42 long)...

I got an error when I tried: https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/whisper/README.md#distil-whisper

My command was:

```shell
!trtllm-build --checkpoint_dir distil_whisper_medium_en_weights_int8/encoder \
  --output_dir distil_whisper_medium_en_int8/encoder \
  --paged_kv_cache disable \
  --moe_plugin disable \
  --enable_xqa disable \
  ---max_batch_size 8 \
  ...
```

The error report that occurred reads:

```
2024-08-20 07:45:07.071785: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-20 07:45:07.092394:...
```

> https://www.reddit.com/r/MLQuestions/comments/1ee5a89/finetuning_an_llm_runtimeerror_some_tensors_share/
>
> I read that post and downgraded some packages, then upgraded whichever package still threw errors, and after that it worked. Based on my experiments it should be a torch or torchdata version issue.

Thanks for the explanation. Could you send me your environment configuration? There are too many package versions, and trying them out one by one takes too much time.

> peft==0.12.0 transformers==4.44.0 torch==2.4.0 torchdata==0.5.1 loralib==0.1.1

Got it, thanks~
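For reference, the pinned versions quoted above can be installed in one step. The package names and versions are taken verbatim from the comment; whether this exact set resolves the tensor-sharing RuntimeError in other environments is untested.

```shell
# Pin the versions reported to work together (copied from the comment above).
pip install \
  peft==0.12.0 \
  transformers==4.44.0 \
  torch==2.4.0 \
  torchdata==0.5.1 \
  loralib==0.1.1
```

Recording such a set in a `requirements.txt` makes the environment reproducible for anyone else hitting the same version conflict.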

I got an error when I tried: https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/whisper/README.md#distil-whisper

My command was:

```shell
!trtllm-build --checkpoint_dir distil_whisper_medium_en_weights_int8/encoder \
  --output_dir distil_whisper_medium_en_int8/encoder \
  --paged_kv_cache disable \
  --moe_plugin disable \
  --enable_xqa disable \
  ---max_batch_size 8 \
  --gemm_plugin disable \
  --bert_attention_plugin float16 \
  --remove_input_padding...
```
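One thing worth checking in the command above: `---max_batch_size` is written with three leading dashes, while the `trtllm-build` flag in the TensorRT-LLM Whisper example uses two, so the argument parser may reject it. A sketch of the same invocation with that flag spelled as documented (the leading `!` is a Jupyter shell prefix and is dropped here; the flags truncated in the original comment are left elided):

```shell
trtllm-build --checkpoint_dir distil_whisper_medium_en_weights_int8/encoder \
  --output_dir distil_whisper_medium_en_int8/encoder \
  --paged_kv_cache disable \
  --moe_plugin disable \
  --enable_xqa disable \
  --max_batch_size 8 \
  --gemm_plugin disable \
  --bert_attention_plugin float16
  # remaining flags (--remove_input_padding ...) elided in the original comment
```

This is only a spelling correction of the reported command, not a verified fix for the error in question.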

> You need a newer version of Argos Translate and CTranslate2. CTranslate2 models aren't forward compatible.

My CTranslate2 has been upgraded to v2.24.0, but it still reports errors. If I...