Efficient-CapsNet

Performance issues in /utils (by P3)

Open DLPerf opened this issue 4 years ago • 1 comments

Hello! I've found a performance issue in /utils: `batch()` should be called before `map()`, which could make your program more efficient by letting the mapped function run once per batch instead of once per element. Here is the TensorFlow documentation that supports it.

Detailed description is listed below:

  • /pre_process_mnist.py: `dataset_train.batch(batch_size)` should be called before `dataset_train.map(image_rotate_random, num_parallel_calls=PARALLEL_INPUT_CALLS)`, `dataset_train.map(image_shift_rand, num_parallel_calls=PARALLEL_INPUT_CALLS)`, `dataset_train.map(image_squish_random, num_parallel_calls=PARALLEL_INPUT_CALLS)`, `dataset_train.map(image_erase_random, num_parallel_calls=PARALLEL_INPUT_CALLS)` and `dataset_train.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)`.
  • /pre_process_mnist.py: `dataset_test.batch(batch_size)` should be called before `dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)`.
  • /pre_process_smallnorb.py: `dataset_train.batch(batch_size)` should be called before `dataset_train.map(random_patches, num_parallel_calls=PARALLEL_INPUT_CALLS)`, `dataset_train.map(random_brightness, num_parallel_calls=PARALLEL_INPUT_CALLS)`, `dataset_train.map(random_contrast, num_parallel_calls=PARALLEL_INPUT_CALLS)` and `dataset_train.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)`.
  • /pre_process_smallnorb.py: `dataset_test.batch(1)` should be called before `dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)`.
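The reordering above can be sketched as follows. This is a minimal, self-contained example (the `add_noise` function and its shapes are hypothetical, not the repo's actual helpers); it assumes an element-wise transformation that vectorizes cleanly over a leading batch dimension:

```python
import tensorflow as tf

PARALLEL_INPUT_CALLS = tf.data.AUTOTUNE  # assumed; the repo defines its own constant

def add_noise(image, label):
    # Element-wise op: works identically on (28, 28, 1) and (B, 28, 28, 1).
    return image + tf.random.normal(tf.shape(image), stddev=0.1), label

images = tf.zeros([64, 28, 28, 1])
labels = tf.zeros([64], dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices((images, labels))

# Original order: map() is invoked once per element (64 calls).
slow = ds.map(add_noise, num_parallel_calls=PARALLEL_INPUT_CALLS).batch(16)

# Suggested order: map() is invoked once per batch (4 calls),
# amortizing the per-call overhead of the tf.data pipeline.
fast = ds.batch(16).map(add_noise, num_parallel_calls=PARALLEL_INPUT_CALLS)

for x, y in fast.take(1):
    print(x.shape)  # (16, 28, 28, 1)
```

Both pipelines yield batches of the same shape; the difference is how many times the mapped function is invoked.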

Besides, you need to check whether each function called in `map()` (e.g., `generator` in `dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)`) is affected, so that the changed code still works properly. For example, if `generator` took data with shape (x, y, z) as input before the fix, it would receive data with shape (batch_size, x, y, z) afterwards.
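A hedged sketch of that adaptation, using hypothetical helpers rather than the repo's actual functions: purely element-wise ops broadcast over the batch dimension unchanged, while ops that assume a single rank-3 image can be wrapped with `tf.map_fn` to iterate over the batch.

```python
import tensorflow as tf

# Element-wise function (hypothetical): broadcasts over a leading batch
# dimension with no changes needed.
def normalize(image, label):
    return tf.cast(image, tf.float32) / 255.0, label

# Function assuming a single (H, W, C) image: after batch() runs first,
# wrap the per-image op with tf.map_fn so it is applied image by image.
def random_crop_batch(images, labels):
    crop = lambda img: tf.image.random_crop(img, size=[24, 24, 1])
    return tf.map_fn(crop, images), labels

images = tf.zeros([16, 28, 28, 1], dtype=tf.uint8)
labels = tf.zeros([16], dtype=tf.int32)
cropped, _ = random_crop_batch(images, labels)
print(cropped.shape)  # (16, 24, 24, 1)
```

Note that `tf.map_fn` reintroduces a per-element loop inside the mapped function, so the speedup from batching first only materializes when the body can be expressed with batched tensor ops.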

Looking forward to your reply. By the way, I would be glad to open a PR to fix it if you are too busy.

DLPerf avatar Aug 30 '21 06:08 DLPerf

Hi @DLPerf!

Thank you for the tip. However, I have a doubt: doesn't inserting `batch` before all the transformation functions reduce the variance of the dataset? Doing as you suggest, all the images in a batch go through the same transformation, whereas keeping `batch` at the end ensures a different random transformation for each image.
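That concern can be illustrated with a small sketch (the `shift_rand` function below is hypothetical, but mirrors how many augmentation helpers draw a single random value per invocation):

```python
import tensorflow as tf

# Hypothetical augmentation: draws ONE random shift per call.
def shift_rand(images):
    shift = tf.random.uniform([], -2.0, 2.0)  # a single scalar
    return images + shift

batch = tf.zeros([4, 28, 28, 1])

# With batch().map(shift_rand), the one scalar is broadcast over all
# 4 images, so every image in the batch gets the SAME shift:
out = shift_rand(batch)
assert bool(tf.reduce_all(out[0] == out[1]))

# With map(shift_rand).batch(), the function is called once per image,
# drawing 4 independent shifts instead.
```

One possible middle ground is to draw the randomness per element even after batching, e.g. `tf.random.uniform([tf.shape(images)[0], 1, 1, 1], -2.0, 2.0)`, which keeps per-image variance while still mapping once per batch; whether that is worth the rewrite is a judgment call for each transformation.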

EscVM avatar Sep 02 '21 13:09 EscVM