Danning XIE
Up. Same problem with jpg format
I have solved the problem. It was because my images contained both (112, 96, 3) and (112, 96) arrays, the latter being one-channel images. I fixed it by using cv2 to read the images,...
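For anyone hitting the same shape mismatch, here is a minimal sketch of the workaround I mean (file paths, resize target, and the helper name are illustrative, not from the original code): `cv2.imread` with the default color flag decodes every file to three channels, so grayscale images also come out as (H, W, 3).
~~~python
import cv2
import numpy as np

def load_images(paths, size=(96, 112)):
    """Read images with OpenCV so grayscale files also become 3-channel arrays."""
    images = []
    for p in paths:
        img = cv2.imread(p, cv2.IMREAD_COLOR)  # always decodes to (H, W, 3), BGR order
        img = cv2.resize(img, size)            # cv2.resize takes (width, height)
        images.append(img)
    return np.stack(images)                    # uniform shape (N, 112, 96, 3)
~~~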
I've encountered the same problem. However, is there any way to solve it without installing Keras 1.2? Can I use the code with Keras 2? @duggalrahul
Batch size: 64, number of batches: 20, number of GPUs: 2. The error I got: `InvalidArgumentError: Incompatible shapes: [64,2] vs. [128,2]`. How can I deal with this?
Hi @mohantym, thanks for looking into this! I agree the input may not be perfectly valid. However, since this is a public API, it would be great to have the function kindly...
@vijaya-lakshmi-venkatraman I don't think the issue is fixed. At least the documentation has not been updated.
@vijaya-lakshmi-venkatraman Hi, it is fixed in the master version, but it still exists in the [1.6](https://mxnet.apache.org/versions/1.6/api/python/docs/api/mxnet/util/index.html#mxnet.util.use_np) documentation. Should we consider this as fixed?
@vijaya-lakshmi-venkatraman It looks good now. I will close the issue.
@songdejia After trying your method, the problem was solved. However, it seems that I cannot `import torch` anymore. Here is the error: `libgomp.so.1: version 'GOMP_4.0' not found`
I just found that `tf.quantization.fake_quant_with_min_max_vars_per_channel` and `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient` also abort:
~~~python
import numpy as np
import tensorflow as tf

tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs=[], max=[], min=np.ones((0, 1)))
tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient(
    inputs=1, gradients=1, max=[], min=-1)
~~~
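For comparison, here is a sketch of what I believe a well-formed call looks like (shapes and values are illustrative, assuming TF 2.x): the last dimension of `inputs` is the channel axis, and `min`/`max` are 1-D vectors of that same length.
~~~python
import numpy as np
import tensorflow as tf

# inputs: [batch, channels]; min/max must be 1-D with length == channels
inputs = tf.constant(np.random.rand(4, 3), dtype=tf.float32)
out = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs=inputs, min=[0.0, 0.0, 0.0], max=[1.0, 1.0, 1.0], num_bits=8)
print(out.shape)  # (4, 3), same shape as inputs
~~~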