AMD GPU support
Hello!
Can you add support for AMD GPUs?
It seems to me that PyTorch does not support ROCm (AMD GPU support) on Windows.
Hello
As stated by @Snowad14, it would not work on Windows.
And I don't have an AMD GPU. However, if you use Linux and your AMD GPU is supported (check https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package), I'll create an 'amd' branch so you can help verify whether it works. According to https://pytorch.org/docs/stable/notes/hip.html we shouldn't need to modify the source code to use an AMD GPU, so you may verify whether the current master branch already works.
I'm also considering switching to other, more deployment-friendly frameworks, but that would require extra effort to make new fancy models work, so it won't happen before the next iteration.
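For reference, the HIP notes mean the ROCm build of PyTorch reuses the `torch.cuda` API, so the usual device-selection code should behave the same on AMD. A minimal sketch (assuming a ROCm build on a supported Linux setup):

```python
import torch

# On a ROCm build of PyTorch, torch.cuda.* is backed by HIP, so the same
# device-selection code works for both NVIDIA and supported AMD GPUs.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('device:', device)

# torch.version.hip is a version string on ROCm builds and None on CUDA builds,
# which is a quick way to confirm which backend you actually got.
print('hip:', torch.version.hip)

x = torch.randn(2, 3, device=device)
print(x.sum().item())
```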
@dmMaze hmmm https://stackoverflow.com/a/63009844 https://github.com/microsoft/DirectML/issues/52#issuecomment-948948611
So it's pytorch-directml; you may verify whether the current master branch works with Windows + AMD + pytorch-directml.

Great. pytorch-directml doesn't share the same semantics as pytorch-cuda; I will try to make it compatible this week.
Delete BallonsTranslator\ballontranslator\data\config\config.json before you run the source code.
Some operators are not supported by pytorch_dml yet.
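For anyone testing this, here is a rough sketch of the device selection involved. It assumes a DirectML package is installed and is illustrative only: the newer `torch-directml` plugin exposes `torch_directml.device()`, while the older `pytorch-directml` fork used a plain `"dml"` device string instead.

```python
import torch

def pick_device():
    # CUDA and ROCm builds both report through torch.cuda.
    if torch.cuda.is_available():
        return torch.device('cuda')
    try:
        # Newer torch-directml plugin; the older pytorch-directml fork
        # used torch.device('dml') and has no torch_directml module.
        import torch_directml
        return torch_directml.device()
    except ImportError:
        return torch.device('cpu')

device = pick_device()
# Some operators are missing on DirectML, so keep tensors/models movable
# back to CPU when an op fails.
x = torch.ones(4, device=device)
print(device, x.mean().item())
```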
I would suggest you use WSLg + pytorch-ROCm instead.
@dmMaze Hello! Regarding AMD GPUs: TensorFlow works on macOS GPUs via the Metal plugin. Can you add support for that?


https://developer.apple.com/metal/tensorflow-plugin/
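For context, the linked plugin exposes the Apple GPU to TensorFlow as a regular GPU device. This project uses PyTorch, so the following is only a quick way to confirm the plugin itself works, assuming `tensorflow-metal` is installed alongside TensorFlow:

```python
import tensorflow as tf

# With the tensorflow-metal plugin installed, the Apple GPU is listed
# as an ordinary 'GPU' physical device.
print(tf.config.list_physical_devices('GPU'))

# Run a tiny op pinned to the GPU to confirm the Metal backend is used.
with tf.device('/GPU:0'):
    a = tf.random.normal([2, 2])
    b = tf.random.normal([2, 2])
    print(tf.matmul(a, b))
```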