Deterministically get activation_index, fixed indentation, added support for Python 3
Hi there, just wanted to say thank you for the blog post and the code example. I noticed that the function compute_rank in finetune.py mutates shared state, namely grad_index, to calculate activation_index.
See: https://github.com/jacobgil/pytorch-pruning/blob/7c3a5afe5c43869a9aad23b391878452b79fcb00/finetune.py#L73
While that's fine on a single GPU, it becomes non-deterministic when pruning/training on multiple GPUs: gradient hooks can fire in any order across devices, so the shared counter stops matching the right activation.
This pull request solves that issue and also adds support for Python 3.
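For context, here is the rough shape of the fix (a simplified sketch, not the exact diff; the FilterPrunner name and structure follow finetune.py, but details may differ): activation_index gets bound into each gradient hook with functools.partial at registration time, so compute_rank no longer has to reconstruct it from a mutable grad_index counter.

from functools import partial
import torch.nn as nn

class FilterPrunner:
    # Sketch of the hook registration, simplified from finetune.py.
    def __init__(self, model):
        self.model = model
        self.activations = []

    def forward(self, x):
        self.activations = []
        activation_index = 0
        for layer in self.model.features:
            x = layer(x)
            if isinstance(layer, nn.Conv2d):
                # The index is frozen into the hook here, so the backward
                # pass never has to infer it from shared mutable state.
                x.register_hook(partial(self.compute_rank, activation_index))
                self.activations.append(x)
                activation_index += 1
        return x

    def compute_rank(self, activation_index, grad):
        # autograd supplies grad; activation_index was bound at registration
        activation = self.activations[activation_index]
        # ... rank computation proceeds as in the original ...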
It's hard to explain, but here's a code snippet that shows the same idea with a closure: the inner function captures a, so f(10) returns a function that only still needs b.

def f(a):
    def F(b):
        return a + b
    return F

>>> fun = f(10)
>>> fun(3)
13
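And here is the same idea with functools.partial directly, which is what the hook registration uses: binding the first argument up front leaves a one-argument callable, exactly the signature register_hook expects. (A sketch; this standalone compute_rank just stands in for the real method.)

from functools import partial

def compute_rank(activation_index, grad):
    # grad is what autograd passes to the hook; activation_index was
    # bound in advance by partial()
    print(activation_index, grad.shape)

hook = partial(compute_rank, 3)  # freezes activation_index = 3
# hook(grad) now takes only the gradient, so it can be passed to
# x.register_hook(hook) and still know which activation it belongs to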
Oh whoops, you are completely right: it should be registering the hook with the partial function and appending X to the activations, not the other way around. I should have slept before committing this. I'll change it when I have time, thanks.
fixed in https://github.com/jacobgil/pytorch-pruning/pull/4/commits/212f1b5b65d51be0b39a03d000516934aeb08486
Hello, thank you for the blog post and the code. I ran your code but hit a problem: "python finetune.py --train" reports a test accuracy of about 50%, while the train accuracy is > 95%. I can't figure out what's wrong with my setup, so I'm asking for your help. I suspect the data isn't being loaded correctly: the test path is /../../test2, the folder "test" is inside test2, and the pictures are inside "test". Is the data loaded correctly with that layout? I'm new to Python. Thank you in advance for your help.
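If it helps, one thing to check (assuming the repo loads data with torchvision's ImageFolder; the path below is a placeholder): ImageFolder treats each immediate subfolder of the root as one class. With the layout you describe, pointing it at test2 would make it see a single class named "test", which on a two-class problem would look like roughly 50% test accuracy. A quick sanity check:

from torchvision import datasets

# Placeholder path: point this at the directory whose immediate
# subfolders are the class folders.
dataset = datasets.ImageFolder("/path/to/test2/test")
print(dataset.classes)  # should list exactly your class names
print(len(dataset))     # should match the number of test images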