Yassine Benyahia
@quark0 did you manage to get this performance after this commit https://github.com/melodyguan/enas/commit/2734eb2657847f090e1bc5c51c2b9cbf0be51887? They actually fixed the evaluation, and I can't seem to get below 63 ppl, which would make...
The new commit seems to sample only two activation functions because num_funcs is set to 2. I think it should be set to 4.
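For illustration, here is a minimal sketch (hypothetical names, not the actual ENAS code) of how a `num_funcs` setting can silently shrink the set of activations the controller can sample:

```python
# Hypothetical sketch of how a num_funcs flag restricts the search space;
# names here are illustrative, not taken from the ENAS repository.
import random

ACTIVATIONS = ["tanh", "relu", "identity", "sigmoid"]

def sample_activation(num_funcs=2):
    # With num_funcs=2 only "tanh" and "relu" can ever be sampled;
    # num_funcs=4 restores the full set of four activations.
    return random.choice(ACTIVATIONS[:num_funcs])

print(sample_activation(num_funcs=4))
```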
Hi, here is the sparse caffemodel: https://drive.google.com/file/d/0B64fJzg3TzZwSm9HeHNlT0hXSEk/view. I retrained GoogLeNet with sparsity on a subsample of ImageNet; you can download the original model here: https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet. Thanks, best
Yes, I agree there should not be any difference. Here is my training log: [train.log](https://github.com/tidsp/caffe-jacinto/files/1440500/train.log). I am not using any quantization.
And here is the prototxt I am using: [googlenet_deploy.prototxt.docx](https://github.com/tidsp/caffe-jacinto/files/1440529/googlenet_deploy.prototxt.docx). The thing is, when I run inference in caffe-jacinto using this _iter_8000.caffemodel, which is half the size of the original, I get...
Hi, I meant BVLC/caffe.
You are right; the same thing happened in NVcaffe. Thanks.
Yes, thank you manu.
Hey manu, thanks for your answer. The error I get using pycaffe is the following: `TypeError: unhashable type: 'LayerParameter'`. Here is my code: `layer.quantization_param.qparam_w.bitwidth = 8`. It is actually...
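For context, a minimal sketch of how this field could be set by editing the prototxt through the protobuf API rather than a runtime `Net` object. The `quantization_param.qparam_w.bitwidth` fields are taken from the snippet above and assumed to exist in caffe-jacinto's caffe.proto; the file paths are hypothetical. The error itself likely means a `LayerParameter` message was used as a dict key, since protobuf Python messages are unhashable.

```python
# A minimal sketch, assuming caffe-jacinto's caffe.proto defines the
# quantization_param.qparam_w.bitwidth fields shown in the comment above.
from google.protobuf import text_format
import caffe.proto.caffe_pb2 as caffe_pb2

net = caffe_pb2.NetParameter()
with open("googlenet_deploy.prototxt") as f:  # hypothetical path
    text_format.Merge(f.read(), net)

for layer in net.layer:
    if layer.type == "Convolution":
        # Set fields directly on the LayerParameter message. Using the
        # message itself as a dict key would raise
        # "TypeError: unhashable type: 'LayerParameter'".
        layer.quantization_param.qparam_w.bitwidth = 8

with open("googlenet_deploy_quant.prototxt", "w") as f:
    f.write(text_format.MessageToString(net))
```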
Hello, I was indeed referring to the examples in the caffe-jacinto-models repo. Thank you for your quick answer. Best, Yassine