In case anybody has issues with dependencies, I created a Dockerfile for your convenience.
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu18.04
ENV DEBIAN_FRONTEND=noninteractive
# Install basic dependencies
RUN apt-get update && apt-get install -y \
curl \
build-essential \
git \
software-properties-common \
ffmpeg \
libsm6 \
libxext6 \
libgl1 \
ninja-build \
unzip
# This step must be completed before setting a different python (3.8) as system default
RUN add-apt-repository ppa:ubuntu-toolchain-r/test
RUN apt-get update && apt-get install -y gcc-11 g++-11
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 90 \
--slave /usr/bin/g++ g++ /usr/bin/g++-11
RUN update-alternatives --config gcc
# Add deadsnakes PPA and install Python 3.8
RUN add-apt-repository ppa:deadsnakes/ppa && \
apt-get update && \
apt-get install -y python3.8 python3.8-dev python3.8-venv python3.8-distutils
# Make Python 3.8 the default
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
# Install UV
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Install PyTorch with CUDA support using Python 3.8
RUN uv pip install --python python3.8 torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118 --system
# Install other dependencies
RUN uv pip install --python python3.8 setuptools wheel --system
RUN uv pip install --python python3.8 \
opencv-python==4.8.1.78 \
trimesh==3.23.5 \
open3d==0.17 \
pyvista==0.42.3 \
scipy==1.10.1 \
scikit-image==0.21.0 \
pyhocon==0.3.59 \
pyexr==0.3.10 \
tensorboard==2.14.0 \
icecream==2.1.3 \
PyMCubes==0.1.4 \
pyembree==0.2.11 \
--system
# Install tiny-cuda-nn; set TCNN_CUDA_ARCHITECTURES to your GPU's compute capability.
# You can obtain it with the command below.
# Unfortunately, I have not found a way to detect this automatically during the Docker build stage.
# nvidia-smi --query-gpu=compute_cap --format=csv | tail -1 | sed "s#\.##g"
RUN PATH="/usr/local/cuda/bin:${PATH}" \
LIBRARY_PATH="/usr/local/cuda/lib64/stubs:${LIBRARY_PATH}" \
TCNN_CUDA_ARCHITECTURES="86" \
CXXFLAGS="-std=c++17" \
uv pip install --no-build-isolation --python python3.8 git+https://github.com/NVlabs/tiny-cuda-nn/@2ec562e853e6f482b5d09168705205f46358fb39#subdirectory=bindings/torch --system
# Clone SuperNormal and install the bundled nerfacc
RUN git clone https://github.com/CyberAgentAILab/SuperNormal.git /supernormal
WORKDIR /supernormal
RUN uv pip install --python python3.8 --upgrade pip setuptools wheel --system
RUN PATH="/usr/local/cuda/bin:${PATH}" \
LIBRARY_PATH="/usr/local/cuda/lib64/stubs:${LIBRARY_PATH}" \
uv pip install --python python3.8 \
# --config-settings build-backend=setuptools.build_meta \
--no-build-isolation -e ./third_parties/nerfacc-0.3.5/nerfacc-0.3.5/ --system
Now, since there is no CUDA runtime available by default when building this Docker container, you need to enable it as described in https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime. On Ubuntu:
Install nvidia-container-runtime:
sudo apt-get install nvidia-container-runtime
Edit (or create) /etc/docker/daemon.json with the following content:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
Restart the Docker daemon:
sudo systemctl restart docker
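As a quick sanity check (my own suggestion, not part of the original setup), you can verify the nvidia runtime is active by running a CUDA base image; it should print the usual GPU table:

```shell
# Should print the nvidia-smi GPU table if the nvidia runtime is working
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu18.04 nvidia-smi
```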
Finally, you can build the container using:
DOCKER_BUILDKIT=0 docker build . --tag supernormal
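If you want to avoid hard-coding TCNN_CUDA_ARCHITECTURES="86", one option (a sketch, assuming you modify the Dockerfile to declare `ARG TCNN_CUDA_ARCHITECTURES` and reference it in the tiny-cuda-nn install step) is to detect the compute capability on the host and pass it in as a build argument:

```shell
# Query the host GPU's compute capability and strip the dot (e.g. "8.6" -> "86"),
# then pass it to the build. This assumes the Dockerfile declares
# ARG TCNN_CUDA_ARCHITECTURES and uses it in the tiny-cuda-nn step.
CC=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -1 | tr -d '.')
DOCKER_BUILDKIT=0 docker build . --tag supernormal --build-arg TCNN_CUDA_ARCHITECTURES="$CC"
```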
If you want, you can connect to the container in VS Code with the Dev Containers extension, using the following devcontainer.json:
{
"name": "CUDA",
"image": "supernormal",
"runArgs": [
"--gpus=all"
],
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-vscode.cpptools",
"ms-python.vscode-pylance"
]
}
},
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind",
"workspaceFolder": "/workspace"
}
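Alternatively, as a quick smoke test of the finished image from the command line (my own suggestion), something like:

```shell
# Run the container with GPU access and confirm that PyTorch can see the GPU;
# should print "True" on a working setup
docker run --rm --gpus all supernormal \
    python3 -c "import torch; print(torch.cuda.is_available())"
```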