Container can use all GPUs even though only 1 is requested
Hi support, in the pod container we request only 1 GPU device, but in reality we can use all 3 GPU devices.
In the pod YAML file, only 1 GPU is requested:

```yaml
resources:
  limits:
    cpu: "16"
    memory: 32Gi
    nvidia.com/gpu: "1"
  requests:
    cpu: "1"
    memory: 1Gi
    nvidia.com/gpu: "1"
```
But in Jupyter, we can see and use all three GPUs:

```
!nvidia-smi | grep Default
| N/A   62C    P0    54W /  75W |   7369MiB /  7611MiB |     84%      Default |
| N/A   62C    P0    49W /  75W |   7502MiB /  7611MiB |     88%      Default |
| N/A   58C    P0    52W /  75W |   7504MiB /  7611MiB |     46%      Default |
```
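For reference, one quick way to check whether GPU isolation is being applied to this container (a minimal sketch, assuming the NVIDIA device plugin's default "envvar" device-list strategy, in which the plugin injects the allocated GPU UUIDs via `NVIDIA_VISIBLE_DEVICES`):

```python
# Minimal sketch, assuming the device plugin injects NVIDIA_VISIBLE_DEVICES
# with the UUID(s) of the allocated GPU(s). A value of "all" would mean the
# container runtime exposes every GPU on the node to this container.
import os

print(os.environ.get("NVIDIA_VISIBLE_DEVICES", "<not set>"))
```

If this prints `all` rather than a single GPU UUID, it would suggest the device plugin's per-container GPU assignment is not reaching the container runtime.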
Versions:
- Kubernetes: v1.17.9
- Jupyter image: tensorflow/tensorflow:1.15.5-gpu-py3-jupyter
- GPU device plugin image: nvidia/k8s-device-plugin:v0.10.0
So how can we limit the container to only the 1 GPU requested in the YAML?
Thanks, majorin
@majorinche could you provide the complete pod spec?