mathpopo

Results: 35 comments of mathpopo

Enabled the code below and it runs fine (lines 34 to 38 of /FastMaskRCNN/libs/layers/crop.py):

```python
if batch_inds is False:
    num_boxes = tf.shape(boxes)[0]
    batch_inds = tf.zeros([num_boxes], dtype=tf.int32, name='batch_inds')
    batch_inds = boxes[:, 0]...
```
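For context, here is a self-contained sketch of what those lines do (my reading, not the exact FastMaskRCNN source): when `batch_inds` is not supplied, it is rebuilt from the `boxes` tensor, assuming `boxes[:, 0]` carries each box's batch index. The helper name is hypothetical.

```python
import tensorflow as tf  # TF1-style graph code, matching FastMaskRCNN

def make_batch_inds(boxes, batch_inds=False):
    # Hypothetical helper illustrating lines 34-38 of crop.py:
    # rebuild batch_inds from boxes when it was not passed in.
    if batch_inds is False:
        num_boxes = tf.shape(boxes)[0]
        batch_inds = tf.zeros([num_boxes], dtype=tf.int32, name='batch_inds')
        # boxes[:, 0] is assumed to store each box's batch index
        batch_inds = tf.cast(boxes[:, 0], tf.int32)
    return batch_inds
```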

```
(base) chenxin@Nitro-AN515:~/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/ncnn/build$ ./main /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/models/onnx-test/version-slim-320_simplified.bin /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/models/onnx-test/version-slim-320_simplified.param /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/imgs/3.jpg
Processing /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/imgs/3.jpg
Segmentation fault (core dumped)
```

![Screenshot from 2020-06-04 11-56-40](https://user-images.githubusercontent.com/21274466/83713391-8c56fd00-a65a-11ea-80eb-c39c055a740d.png)

```
ra-Light-Fast-Generic-Face-Detector-1MB$ python -m onnxsim /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/models/onnx/version-slim-320.onnx /home/chenxin/Downloads/Ultra-Light-Fast-Generic-Face-Detector-1MB/models/onnx-test/version-slim-320_simplified.onnx
Simplifying...
Checking 0/3...
Checking 1/3...
Checking 2/3...
Ok!
```

I compared the existing version-slim-320_simplified.onnx with the onnxsim result and just use the existing version-slim-320_simplified.onnx --> ncnn...
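For reference, a programmatic equivalent of the `python -m onnxsim` step above, as a hedged sketch (paths abbreviated from the command line above; assumes the onnx and onnx-simplifier packages are installed):

```python
import onnx
from onnxsim import simplify

# Load the exported model, simplify it, and check the result before saving.
model = onnx.load("models/onnx/version-slim-320.onnx")
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, "models/onnx-test/version-slim-320_simplified.onnx")
```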

https://github.com/Gumpest/YOLOv5-Multibackbone-Compression/issues/19 But I downloaded it last week.

This is how I installed MQBench 0.02:

```
git clone https://github.com/ZLkanyo009/MQBench.git
cd MQBench
python setup.py build
python setup.py install
```
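To confirm which version actually got installed, a quick check can help (this assumes the package registers under the distribution name MQBench; the check itself is my addition, not part of the repo's instructions):

```python
# Hypothetical sanity check: print the installed MQBench version
import pkg_resources

print(pkg_resources.get_distribution("MQBench").version)
```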

```
Parameter containing:
tensor([], size=(0, 32, 1, 1), requires_grad=True)
```
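For reference, a minimal sketch of how a parameter with this shape prints. The pruning origin is my assumption, not confirmed by the repo: an empty (0, 32, 1, 1) weight looks like a 1x1 conv over 32 input channels whose output channels have all been removed.

```python
import torch
import torch.nn as nn

# Empty (0, 32, 1, 1) weight, e.g. a 1x1 conv whose output channels
# were all pruned away (assumption).
weight = nn.Parameter(torch.empty(0, 32, 1, 1))
print(weight)  # Parameter containing: tensor([], size=(0, 32, 1, 1), requires_grad=True)
```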

```python
def __init__(self, c1, c2, c2o, n=1, shortcut=True, g=1, e=[0.5, 0.5], rate=[1.0 for _ in range(12)]):  # ch_in, ch_out, number, shortcut, groups, expansion
    super().__init__()
    # c_ = int(c2 * e)  # hidden...
```

/YOLOv5-Multibackbone-Compression/models/common.py

I use a Titan XP and it runs well; this effect only shows up on the V100. Could the difference between the two graphics cards explain it?

Disable cuDNN for the batchnorm layer with `sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g"`? I already did this, using PyTorch 0.4.1 and CUDA 9.0.
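For reference, a similar effect can be had without patching line 1194 by flipping the standard PyTorch switch at the top of the script; note this disables cuDNN globally rather than only at the batchnorm call site.

```python
import torch

# Disable cuDNN entirely, so batchnorm (and every other layer) falls back
# to the native CUDA implementation instead of cuDNN kernels.
torch.backends.cudnn.enabled = False
```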