
Windows Portable AMD, but with ROCm 7.0

Open · MakoLeep opened this issue 1 month ago · 1 comment

Custom Node Testing

Your question

Greetings. I have a weird question: is it possible to use ROCm 7.0 instead of 7.1 in the AMD Portable build for Windows? Can I actually modify it manually?

A little story: I have a cheap RX 7600 with 8 GB of VRAM (I'm poor). A few months ago I was on Ubuntu 24.04.3 LTS, comfortably using ComfyUI with a ROCm 7.0.2 nightly build. Everything ran absolutely stable with the --lowvram or --novram flags; ROCm 7.1 was buggy for me on Ubuntu.

Time passed and I had to reinstall my OS on a new drive. Once I did, I found out that the radeon.com and pytorch.org repositories are blocked in my country (lol, yes, can you imagine that?), so I literally cannot download anything from them. (Can't use a paid VPN, I'm poor.) Because of this I went back to Windows and noticed that an AMD portable build exists. "Wow, that's so cool," I thought, and tried out a few versions.

On Windows: ROCm 6.4 doesn't work for me at all, sadly. Comfy with ROCm 7.1 kinda works, and I can actually generate stuff as I used to on Linux, but I hit the same "bugs" as on Ubuntu: with the --lowvram or --novram flags, generation gets stuck on the CLIPTextEncode node for a really long time. Since I only have 8 GB of VRAM I can't use Normal VRAM mode, otherwise my generation speed drops tenfold.

Logs


Other

No response

MakoLeep avatar Dec 26 '25 23:12 MakoLeep

Try navigating to your python_embeded folder and opening a command prompt there by typing cmd into the folder path bar at the top and pressing Enter. Run python.exe -s -m pip uninstall torch torchaudio torchvision, then run python.exe -s -m pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ torch==2.9.1 torchaudio torchvision, replacing the 2.9.1 with whatever version you want.
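The same steps spelled out as a command-prompt session (the folder name for the portable build is an assumption here; the pip commands and index URL are exactly the ones above):

```shell
:: From inside the portable build's python_embeded folder
:: (folder name may differ depending on which archive you downloaded)
cd ComfyUI_windows_portable\python_embeded

:: Remove the bundled PyTorch wheels
python.exe -s -m pip uninstall torch torchaudio torchvision

:: Install a specific nightly from AMD's gfx110X index
:: (swap 2.9.1 for whichever version you want to try)
python.exe -s -m pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ torch==2.9.1 torchaudio torchvision
```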

I'm currently on a 7900 XT (same gfx110X series) and have been getting excellent performance with the 2.9.1 version. 2.10 and 2.11 are still heavily under development and bug prone; I actually rolled back to 2.9.1 for that reason, but will test them again in the future when they're closer to an actual release.

Also, since you're on AMD, edit your *.bat file you use to launch Comfy. This is what I'm personally using:

set TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1
set MIOPEN_FIND_MODE=FAST
set MIOPEN_LOG_LEVEL=3
set COMFYUI_ENABLE_MIOPEN=1
.\python_embeded\python.exe -s ComfyUI/main.py --use-pytorch-cross-attention --disable-smart-memory
pause
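After swapping the wheels, one quick sanity check is to ask PyTorch itself what it was built against. This is a generic torch introspection sketch, not something specific to the portable build; run it with python_embeded\python.exe -s:

```python
import torch

# Version string of the installed wheel; ROCm nightlies carry a +rocm suffix
print(torch.__version__)

# HIP/ROCm version the wheel was built against; None on CPU-only or CUDA builds
print(torch.version.hip)

# True when the HIP backend can see the GPU (torch.cuda maps to HIP on ROCm)
print(torch.cuda.is_available())
```

If torch.version.hip prints None, pip pulled a CPU wheel instead of the ROCm nightly and the install step needs to be repeated with the --index-url flag.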

RandomGitUser321 avatar Dec 27 '25 00:12 RandomGitUser321

Thank you, RandomGitUser321. I tried it and played with different versions, and was able to fix most of the issues. Also, pytorch-cross-attention works much better than the quad-cross-attention I was using before. Cthulhu blesses you, the wise one.

MakoLeep avatar Dec 29 '25 09:12 MakoLeep