zzp_miracle
OK. BTW, @xuzhao9 can we move these eval functions into the model so that trace/script can trace the whole model without changing the model's semantics? Just like https://github.com/pytorch/benchmark/commit/5b07f357eaed06ccda9e7283f838c11228755229 . I think this...
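A minimal sketch of the pattern being proposed, assuming a simple classifier (the `WrappedForEval` name and the argmax post-processing are hypothetical stand-ins): fold the evaluation logic that used to run outside the model into `forward()`, so `torch.jit.trace` captures the whole evaluation path without changing what the model computes.

```python
import torch
import torch.nn as nn

class WrappedForEval(nn.Module):
    """Hypothetical wrapper: moves eval-time post-processing into forward()."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.model(x)
        # Post-processing that previously ran in a separate eval function;
        # now it is part of the traced graph.
        return torch.argmax(logits, dim=-1)

base = nn.Linear(4, 3)
wrapped = WrappedForEval(base).eval()
# The trace now covers both the model and its eval post-processing.
traced = torch.jit.trace(wrapped, torch.randn(2, 4))
```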
@davidberard98 OK, I got this, thanks! How about the other question, about changing the code so that the whole evaluation model is traced?
Yes, it can run, but some code runs in script mode while other code runs in eager mode, so this does not reflect the true performance of the JIT or some...
OK, I have implemented a jit_callback function to enable the JIT now. In my tests, `torch.jit.optimize_for_inference(torch.jit.freeze(torch.jit.script(self.model.eval()), preserved_attrs=["n_classes"]))` performs worse (39.47 ms) than `torch.jit.freeze(torch.jit.script(self.model.eval()), preserved_attrs=["n_classes"])`, which takes only 23.88 ms on an A10. >...
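For reference, a self-contained sketch of the two variants being compared (using a tiny `TinyNet` module as a hypothetical stand-in for the benchmark model): script, freeze with `preserved_attrs`, and optionally pass the result through `optimize_for_inference`. Both variants should produce outputs matching eager mode; only the timing differs.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for the benchmark model."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.n_classes = 10  # attribute kept alive through freezing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))

model = TinyNet().eval()

# Variant 1: script + freeze, keeping n_classes accessible on the frozen module.
frozen = torch.jit.freeze(torch.jit.script(model), preserved_attrs=["n_classes"])

# Variant 2: additionally run optimize_for_inference on the frozen module.
optimized = torch.jit.optimize_for_inference(frozen)

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    out_eager = model(x)
    out_frozen = frozen(x)
    out_opt = optimized(x)
```

Which variant is faster depends on the model and hardware; as noted above, on the A10 the extra `optimize_for_inference` pass was actually slower for this model.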
Related PRs to the torchbench repo: https://github.com/pytorch/benchmark/pull/1045, https://github.com/pytorch/benchmark/pull/1072, https://github.com/pytorch/benchmark/pull/1073
| pipeline | model | img size | unet-pytorch | unet-disc | e2e-pytorch | e2e-disc |
| --- | --- | --- | --- | --- | --- | ---...
> @zzpmiracle Where can I find scripts to run these benchmarks? I've tried using it on the latest diffusers version, and it can't trace the components... maybe we can use...
TODO1: need to deal with fake tensors when tracing. TorchDynamo uses FakeTensorMode when exporting the FX graph, which may raise the exception `Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode...`
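A hedged sketch of the failure mode and one way around it, assuming the `FakeTensorMode` API from `torch._subclasses` (a private module, so details may shift between releases): operating on a plain tensor while the mode is active raises the error quoted above, whereas converting inputs with `from_tensor` first works.

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

mode = FakeTensorMode()
real = torch.randn(2, 2)

# Convert the real tensor to a FakeTensor before using it under the mode.
fake = mode.from_tensor(real)

with mode:
    out = fake + 1  # fine: all operands are FakeTensors
    try:
        # Mixing an unconverted real tensor under the active mode raises the
        # "Please convert all Tensors to FakeTensors first..." exception.
        real + 1
        raised = False
    except Exception:
        raised = True
```

Alternatively, `FakeTensorMode(allow_non_fake_inputs=True)` relaxes the check, at the cost of silently lifting real tensors into the fake graph.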