Jay
Any updates on GPU support in your roadmap for 2023/2024?
I think it's this: https://stackoverflow.com/a/27502480/886314
Bump. Experiencing the same; wondering what is causing the delay. Edit: probably from https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L118
@teftef6220 can you tell us what changes are needed? Maybe we can help.
This worked for me: `conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=12.1 -c pytorch -c nvidia`. I have CUDA 12.1 installed, so I just bumped the version.
> https://github.com/LargeWorldModel/LWM/issues/7#issuecomment-1944919388

Fixed URL: https://github.com/LargeWorldModel/LWM/issues/7
@discreet did you try using the `--recheck-s3` option?
I see. There was no `nvidia-smi` in that image. Looks like NVIDIA has not released any official Docker image for Debian: https://hub.docker.com/r/nvidia/cuda/tags/
@tareqAmen you can try setting `.vad_mode = VAD_MODE_4` in the `afe_config_t` variable. It can go from 0 to 4; the default is 3, and 0 is the most sensitive: https://github.com/espressif/esp-sr/blob/455314a90cac59d4f50253cf719659f0b9f5d778/include/esp32s3/esp_vad.h#L30
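For context, a minimal sketch of that setting (a config fragment, assuming the standard esp-sr AFE config flow; the header name and `AFE_CONFIG_DEFAULT()` macro may differ across esp-sr versions):

```c
#include "esp_afe_sr_models.h"  // esp-sr AFE interface (version-dependent)

// Hedged sketch: start from the default AFE config, then change VAD
// aggressiveness. VAD_MODE_0..VAD_MODE_4 come from esp_vad.h; per the
// comment above, 0 is the most sensitive and the default is 3.
afe_config_t afe_config = AFE_CONFIG_DEFAULT();
afe_config.vad_mode = VAD_MODE_4;
```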
@IVData you just have to install the package again.