
Evaluate performance

Leengit opened this issue 3 years ago · 2 comments

In particular, are we leveraging the graph-execution optimizations (e.g., parallelization, memory management, GPU usage) of TensorFlow and PyTorch, or do we need to do more to get them?

Leengit · Feb 03 '23 14:02

@cooperlab says: look at the TF MultiWorker strategy (https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras). We can help with this. Key questions are:

  • Can multiple workers run on one machine? (The tutorial's example suggests so; see the sketch after this list.)
  • Can we let each worker identify its portion of the "plan" from its worker index?
  • How cumbersome is this? Is a convenient wrapper, similar to WrappedModel, possible for users?
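
For reference, a minimal sketch of the first two questions, assuming `tf.distribute.MultiWorkerMirroredStrategy` and an illustrative two-worker `TF_CONFIG` on a single machine (the ports are placeholders, not anything HistomicsStream defines):

```python
# Run one copy of this script per worker, changing only the task index.
import json
import os

import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:12345", "localhost:12346"]},
    "task": {"type": "worker", "index": 0},  # index 1 in the second process
})

# Construction blocks until all workers in the cluster spec have started.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

# The cluster resolver exposes this worker's index; a wrapper could use it
# to assign each worker its slice of the tiling "plan".
worker_index = strategy.cluster_resolver.task_id
num_workers = len(json.loads(os.environ["TF_CONFIG"])["cluster"]["worker"])
print(f"worker {worker_index} of {num_workers}")
```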

Leengit · Feb 03 '23 17:02

TensorFlow does autosharding, so we shouldn't have to shard the `tf.data.Dataset` explicitly. We could add convenience functions so that relationships like `global_batch_size = num_workers * batch_size_per_worker` are satisfied; a sketch follows.
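
A sketch of that bookkeeping, assuming a `MultiWorkerMirroredStrategy` and a stand-in range dataset in place of the HistomicsStream tile dataset. The auto-shard policy is TensorFlow's default behavior, made explicit here:

```python
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

batch_size_per_worker = 32  # illustrative value
global_batch_size = strategy.num_replicas_in_sync * batch_size_per_worker

dataset = tf.data.Dataset.range(1024)  # stand-in for the tile dataset
# Batch with the *global* size; tf.distribute divides it among replicas.
dataset = dataset.batch(global_batch_size)

# Autosharding is on by default (AutoShardPolicy.AUTO); shown explicitly:
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA
)
dataset = dataset.with_options(options)

dist_dataset = strategy.experimental_distribute_dataset(dataset)
```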

If the user has already created a model, and we want to convert or wrap it so that it is as if the model had been created within a `with strategy.scope():` Python block for some distribution strategy, can we do that after the fact? It might work to write the model to disk and then read it back in within a strategy scope block (sketched below); I have queried Stack Overflow for other possibilities.
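
A sketch of that save-and-reload idea; the helper name and on-disk path are hypothetical, and whether load-time variable creation reliably picks up the strategy scope is exactly what would need verifying:

```python
import tensorflow as tf

def wrap_model_in_strategy(model, strategy, path="saved_model_tmp"):
    """Round-trip `model` through disk so it is rebuilt under `strategy`."""
    model.save(path)
    with strategy.scope():
        # Variables created while loading should belong to the strategy,
        # as if the model had been built inside this scope originally.
        distributed_model = tf.keras.models.load_model(path)
    return distributed_model
```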

Leengit · Feb 21 '23 20:02