
nits for running benchmark models locally

wconstab opened this issue 5 years ago · 0 comments

moved from original issue in https://github.com/pytorch/hub/issues/148 on behalf of @zdevito

I am working on python packaging for PyTorch and just used the benchmark models to verify that the packaging approach would work and not get caught up in the complexity of existing models. I used the ability to loop through the models to get a handle on the torch.nn.Module for each one, saved it in the package, and reloaded it. It exposed a lot of shortcomings in my initial code, and I was able to quickly try fixes and see how they would work. Pretty cool! I think the benchmark suite is going to be really useful for these types of design questions in addition to pure perf improvements. Thanks for helping to put it together.
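
The save-and-reload loop described above can be sketched roughly like this. This is a minimal stand-in using torch.save on a toy nn.Linear rather than the real packaging API or the benchmark suite's model iterator, so names and shapes here are illustrative only:

```python
import io

import torch
import torch.nn as nn

def roundtrip_state(model: nn.Module, fresh: nn.Module) -> nn.Module:
    # Save the module's state and reload it into a fresh instance, as a
    # stand-in for saving the whole torch.nn.Module in a package and
    # reloading it.
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    buf.seek(0)
    fresh.load_state_dict(torch.load(buf))
    return fresh

# Toy module standing in for one of the benchmark models.
model = nn.Linear(4, 2)
restored = roundtrip_state(model, nn.Linear(4, 2))
```

Running each benchmark model through a loop like this is what surfaces the per-model nits listed below.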

As part of using the benchmarks, I ran into a few nits that only surfaced when trying to run things locally.

Couldn't easily work around

Background-Matting (does not work on my local machine)

  • [x] has hard-coded CircleCI paths
  • [x] expects local directory to be a specific value

tacotron2

  • [ ] requires a GPU even if you ask for a CPU model, because it calls .cuda() in load_model()
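
A minimal sketch of a device-aware load_model, assuming a hypothetical signature with a device argument (the real Tacotron2 code differs); the point is to use .to(device) instead of an unconditional .cuda():

```python
import torch
import torch.nn as nn

def load_model(device: str = "cpu") -> nn.Module:
    model = nn.Linear(8, 8)  # toy stand-in for the Tacotron2 network
    # .to(device) is a no-op for "cpu" and only touches the GPU when one
    # is actually requested; calling .cuda() unconditionally makes even
    # device="cpu" runs require a GPU.
    return model.to(torch.device(device))

model = load_model("cpu")
```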

Require workarounds

Overall

  • [ ] the use of sys.path modifications to load files means that error messages are confusing:

e.g. (File "hubconf.py", line 74, in init; which hubconf.py is that?). A fix would be to treat models/ as part of the path and load the submodules from there.

BERT-pytorch
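
A sketch of loading each model's hubconf.py under a qualified module name instead of via sys.path hacks. The "models.<name>.hubconf" naming scheme and the load_hubconf helper are hypothetical, not the suite's actual API:

```python
import importlib.util
import pathlib
import sys

def load_hubconf(model_dir: str):
    # Load a model's hubconf.py under a module name derived from its
    # directory, so the loaded module (and errors it raises) can be traced
    # to a specific model rather than an ambiguous bare "hubconf.py".
    path = pathlib.Path(model_dir) / "hubconf.py"
    name = f"models.{path.parent.name}.hubconf"  # hypothetical naming scheme
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```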

  • [x] expects local directory to be a specific value

attention-is-all-you-need-pytorch

  • [x] expects local directory to be a specific value

fastNLP

  • [x] expects local directory to be a specific value

demucs

  • [ ] get_module does not return a torch.nn.Module (it returns a lambda)
  • [ ] doesn't do anything with the jit flag (it should throw if JIT is not supported)
  • [ ] puts a ScriptModule annotation on the model, but doesn't actually script it
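
A hedged sketch of what get_module could look like instead; Model here is a plain stand-in for the real torch.nn.Module, and the point is to return the module itself and to fail loudly on the unsupported jit flag:

```python
class Model:
    """Stand-in for the real torch.nn.Module the benchmark should return."""
    def __call__(self, x):
        return x

def get_module(device: str = "cpu", jit: bool = False):
    # Raise instead of silently ignoring an unsupported flag.
    if jit:
        raise NotImplementedError("demucs: TorchScript (jit) is not supported")
    model = Model()  # return the module itself, not a lambda wrapping it
    example_inputs = ("dummy input",)  # placeholder inputs
    return model, example_inputs

model, inputs = get_module()
```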

moco

  • [ ] the default device is set to 'cuda', but the runbook specifies the default device is 'cpu'; this causes the model to fail in an unexpected way when CUDA is not installed
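
The device handling the runbook implies could be sketched like this (resolve_device is a hypothetical helper, shown torch-free so the availability check is an explicit parameter):

```python
def resolve_device(requested=None, cuda_available=False):
    # Default to 'cpu' per the runbook; only use CUDA when it is both
    # explicitly requested and actually present, so a CPU-only machine
    # fails with a clear message rather than in an unexpected way.
    if requested is None:
        return "cpu"
    if requested == "cuda" and not cuda_available:
        raise RuntimeError("device='cuda' requested but CUDA is not available")
    return requested
```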

wconstab · Sep 02 '20