error: creating server: Internal - s3:// file-system not supported. To enable, build with -DTRITON_ENABLE_S3=ON.
Hello,
When I pass an S3 path, I receive the following error:
error: creating server: Internal - s3:// file-system not supported. To enable, build with -DTRITON_ENABLE_S3=ON
The pod is based on the nvcr.io/nvidia/tritonserver:24.07-py3-igpu image and is running on an NVIDIA Jetson AGX Orin developer kit. If possible, could you provide an image with S3 support enabled?
Thank you in advance for your help.
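For context, once a build with S3 support is available, the server is pointed at the bucket roughly like this (a sketch based on Triton's documented cloud-storage usage; the endpoint, bucket name, and credential values below are placeholders):

```shell
# Credentials are picked up from the standard AWS environment variables
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-key>
export AWS_DEFAULT_REGION=us-east-1

# For MinIO or another non-AWS endpoint, Triton accepts host:port in the URI
tritonserver --model-repository=s3://my-minio-host:9000/models
```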
Hello NVIDIA team, if possible, could you please suggest the correct build command for enabling S3 support? I also noticed that the NCCL package is not included either; to install it, I had to run "sudo apt install libnccl2 libnccl-dev".
I was using the following command:
sudo python3 build.py \
--build-parallel 10 \
--no-force-clone \
--target-platform igpu \
--target-machine aarch64 \
--filesystem s3 \
--enable-gpu \
--enable-mali-gpu \
--enable-metrics \
--enable-logging \
--enable-stats \
--enable-cpu-metrics \
--enable-nvtx \
--backend onnxruntime \
--backend pytorch \
--backend tensorflow \
--backend python \
--backend tensorrt \
--endpoint http \
--endpoint grpc \
--min-compute-capability "5.3" \
--image "base,nvcr.io/nvidia/${IMAGE_NAME}:${OFFICIAL_MIN_IMAGE_TAG}" \
--image "gpu-base,nvcr.io/nvidia/${IMAGE_NAME}:${OFFICIAL_MIN_IMAGE_TAG}"
Hi @shahizat, interesting; --filesystem s3 should ideally take care of S3. Is it possible to share a build log?
As for dependencies: to determine what dependencies are required by the build, run build.py with the --dryrun flag, and then look in the build subdirectory at Dockerfile.buildbase [reference]
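As a sketch, that dependency check might look like the following (the flag subset and the default build directory path are assumptions; keep your full set of build flags in practice):

```shell
# Generate the Dockerfiles without actually building anything
python3 build.py --dryrun --target-platform igpu --target-machine aarch64 \
    --filesystem s3 --backend onnxruntime

# Inspect the packages the build would install
grep -E 'apt-get install|pip3 install' build/Dockerfile.buildbase
```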
Hi @oandreeva-nv, thanks for your reply. I've finally been able to build it. Here is a link to my blog post: https://www.hackster.io/shahizat/triton-inference-server-on-nvidia-jetson-using-k3s-and-minio-cbcfe3