Apex is not built correctly for PyTorch 2.1.0
Describe the Bug
I want to build the latest apex from source for nightly PyTorch (2.1.0, commit id 3817de5d840bdff3f11ee23782494b5a13ae2001). I run the following command:
python3 -m pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
Although the install ends with "Successfully installed apex-0.1", when I actually use it, it raises ModuleNotFoundError: No module named 'fused_layer_norm_cuda'.
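(A quick sanity check, not from the original report: try importing the compiled extension module directly. fused_layer_norm_cuda is the module named in the error above; if the build silently fell back to a Python-only install, this import fails immediately.)
$ python -c "import fused_layer_norm_cuda; print('CUDA extension OK')"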
I also tried git checkout 4e1ae43f7f7ac69113ef426dd15f37123f0a2ed3 in apex and built with the same command. That build takes much longer than the one on the current commit, so I guess that is the correct building process.
So what I want to ask is: how do I build the newest apex for nightly PyTorch?
On my servers, the solution below works for CUDA 12.1 but does not work for CUDA 11.7; I am not sure why.
pip 23.3 has changed the build-options syntax, so the library versions should match:
$ pip install --upgrade pyyaml omegaconf setuptools hydra-core pip
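(After upgrading, you can confirm the version took effect; the --config-settings form in the next command assumes this newer pip.)
$ pip --version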
My full build command line (apex for fairseq; substitute your own build options as needed):
$ pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" --config-settings "--build-option=--deprecated_fused_adam" --config-settings "--build-option=--xentropy" --config-settings "--build-option=--fast_multihead_attn" ./
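(For reference, and as an untested sketch: the flags from the original post, translated into the same new-style syntax, would look like this.)
$ pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" --config-settings "--build-option=--fast_layer_norm" --config-settings "--build-option=--distributed_adam" --config-settings "--build-option=--deprecated_fused_adam" ./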
@vlesu could you share your environment: the versions of Python, PyTorch, CUDA, the OS, etc.? Thanks.