
GPU Memory Allocation Limits

mwang87 opened this issue · 2 comments

There could be a command-line option to set the maximum GPU memory allocation. By default, TensorFlow pre-allocates 100% of the GPU memory. If that is not needed, letting the user set an appropriate limit would help with GPU sharing. Example code for TensorFlow:

```python
###################################
# TensorFlow wizardry
###################################

# Extra imports to set GPU options
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()

# Don't pre-allocate memory; allocate as needed
config.gpu_options.allow_growth = True

# Only allow a fraction (here 10%) of the GPU memory to be allocated
config.gpu_options.per_process_gpu_memory_fraction = 0.1

# Create a session with the above options specified
K.tensorflow_backend.set_session(tf.Session(config=config))
###################################
```
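(Aside: `ConfigProto` and `Session` are TensorFlow 1.x APIs. In TensorFlow 2.x the same limits are set through `tf.config`; a minimal sketch, assuming a 2.x install — the commented-out 1024 MB cap is an arbitrary example value, not an mmvec default:)

```python
import tensorflow as tf

# TF 2.x equivalent: enable on-demand allocation instead of pre-allocating
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    for gpu in gpus:
        # Allocate GPU memory as needed rather than grabbing it all up front
        tf.config.experimental.set_memory_growth(gpu, True)
    # Alternatively, cap the process at a fixed amount (e.g. 1024 MB):
    # tf.config.set_logical_device_configuration(
    #     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
else:
    print('no GPU visible; these settings are a no-op')
```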

mwang87 avatar Jul 25 '19 18:07 mwang87

@mwang87 would definitely be curious to hear more of your thoughts on this.

Right now, all of the data is being loaded on the GPU to avoid data transfer (see this line).

This isn't ideal for very large datasets. I have a DataLoader built in PyTorch that can handle larger datasets. But given that we are supporting TensorFlow in the first version, I wonder how easy this would be to fix here.
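(For context, a `tf.data` input pipeline is the usual TensorFlow analogue to a PyTorch DataLoader: the arrays stay in host memory and only one mini-batch at a time is transferred to the device. A minimal sketch — the array names and shapes are made up to stand in for mmvec's microbe/metabolite inputs, not its actual data layout:)

```python
import numpy as np
import tensorflow as tf

# Hypothetical count matrices standing in for mmvec's inputs
microbes = np.random.rand(1000, 50).astype(np.float32)
metabolites = np.random.rand(1000, 30).astype(np.float32)

# Stream shuffled mini-batches to the device instead of loading everything at once
dataset = (tf.data.Dataset.from_tensor_slices((microbes, metabolites))
           .shuffle(1000)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

for x_batch, y_batch in dataset.take(1):
    print(x_batch.shape, y_batch.shape)
```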

mortonjt avatar Oct 01 '19 20:10 mortonjt

Personally, I have not done straight TF; in Keras you can incrementally define batches to send over to the GPU.

mwang87 avatar Oct 01 '19 20:10 mwang87
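(For reference, this is roughly what the Keras approach looks like: `model.fit` copies one batch at a time to the GPU, so the full dataset never needs to fit in device memory. A toy sketch — the model and data are illustrative, not mmvec's:)

```python
import numpy as np
from tensorflow import keras

# Toy regression data; only one batch at a time moves to the device
x = np.random.rand(256, 20).astype(np.float32)
y = np.random.rand(256, 1).astype(np.float32)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# batch_size controls how much data is resident on the GPU per step
history = model.fit(x, y, batch_size=32, epochs=1, verbose=0)
```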