Jamie
I think there are 2 ways to do this: 1. create a file like "misc/DataLoaderRaw.lua", as in neuraltalk2; 2. preprocess the images into an h5 file (using prepro_coco_test.py) and...
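A minimal sketch of option 2, assuming h5py is available; the function name `pack_images`, the dataset key `"images"`, and the fixed size are illustrative assumptions, not the actual prepro_coco_test.py interface:

```python
import numpy as np
import h5py

def pack_images(image_arrays, out_path):
    # image_arrays: iterable of HxWx3 uint8 arrays, already resized
    # to a common size (a real script would decode and resize files).
    imgs = np.stack(list(image_arrays)).astype(np.uint8)
    with h5py.File(out_path, "w") as f:
        # store the whole batch as one N x H x W x 3 dataset
        f.create_dataset("images", data=imgs)
    return imgs.shape
```

The evaluation loader would then read the `"images"` dataset back in order instead of decoding raw files at test time.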
@wzn0828 In my opinion, the formulations of "ht" and "st" are similar, but they are affected by different variables when the loss is backpropagated, which gives them different effects.
I have a similar question. According to this paper, … is the region feature of …, and … is a latent variable that denotes a specific image region. This means that only...
I think you can edit forward_dummy() in the SlowFast model to make it support the softmax arg
@junaid340 Yes, you can modify the package files as a workaround. Or just do not pass the softmax arg; forward_dummy() is only used when exporting to ONNX. Thus it goes...
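A hedged sketch of what such an edit could look like: a `forward_dummy()` that optionally applies softmax to the raw class scores before export. The class name and the `(batch, num_classes)` input shape are assumptions for illustration, not the real mmaction2 code:

```python
import numpy as np

class DummyRecognizer:
    # Stand-in for a recognizer whose head emits raw class scores.
    def forward_dummy(self, x, softmax=False):
        # x: (batch, num_classes) raw scores
        if softmax:
            # numerically stable softmax over the class dimension
            e = np.exp(x - x.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)
        return x
```

With `softmax=False` (the default) the exported graph keeps the raw scores, so existing callers are unaffected.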
@ZihengZheng https://github.com/zsdonghao/text-to-image/blob/275880d95b3d0366cbaefe24019c1decddc9d48c/train_txt2im.py#L207 the rnn_loss is set to be 0 after epoch 50
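The schedule described above can be sketched as a weight that zeroes the RNN term late in training. The function name and the exact boundary (`>` vs `>=` epoch 50) are assumptions; check the linked line in train_txt2im.py for the real condition:

```python
RNN_LOSS_CUTOFF_EPOCH = 50  # per the linked train_txt2im.py line

def total_loss(base_loss, rnn_loss, epoch):
    # Drop the RNN loss term after the cutoff epoch so only the
    # remaining losses drive training from then on.
    rnn_weight = 0.0 if epoch > RNN_LOSS_CUTOFF_EPOCH else 1.0
    return base_loss + rnn_weight * rnn_loss
```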
The same issue sometimes occurred on my Ubuntu 16.04 when training other networks; the training process just got stuck at Epoch: [0].
```python
import decord

# Identify corrupted videos: opening a broken file raises an error.
try:
    decord.VideoReader(filename, num_threads=1)
except Exception:
    print(filename)
```
works for me
After removing those images and training the model on the remaining 11540 images, I got 0.7785 mAP with the default settings using model-ckpt-116604
Where did you feed in the input image?