Myung-Joon Kwon
Because there was no DCT pretrained weights file. You don't need it at inference, because the full model weights file is used instead.
Hi, since you introduced a new architecture, you should pretrain it yourself on ImageNet 😄
The default setting treats all tampering types the same (binary segmentation). You could try changing the ground-truth masks and the model, then retraining, but I haven't tried it myself.
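To illustrate what "treats all tamperings the same" means in practice, here is a toy sketch (the label values are made up for illustration; this is not the project's actual data-loading code). Any multi-class tamper mask collapses to a binary real/fake mask before training:

```python
import numpy as np

# Hypothetical multi-class ground-truth mask: 0 = authentic,
# 1 = splicing, 2 = copy-move (label values are illustrative only).
gt = np.array([[0, 1, 2],
               [0, 0, 1]])

# The default binary-segmentation setting collapses every tampering
# type into a single "tampered" class:
binary_gt = (gt > 0).astype(np.uint8)
print(binary_gt)
# [[0 1 1]
#  [0 0 1]]
```

Distinguishing tamper types would mean keeping the class labels in the mask and changing the model's output channels and loss accordingly.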
Did you use the same environment as described in requirements.txt? RAM usage increases linearly at the beginning and then plateaus at a certain point.
Thanks for your report. I'm almost certain I'll replace jpegio with another package in future work; I don't think jpegio is stable enough. If anyone knows another library that...
Sorry, I'm not an expert on Visual Studio. Please ask the authors of JPEGIO.
Well, inference uses a batch size of one, so BATCH_SIZE_PER_GPU has no effect. Image size and GPU memory are the only factors that cause OOM. This means you should reduce...
In train.py, TRAIN.BATCH_SIZE_PER_GPU and TRAIN.IMAGE_SIZE determine the batch size and image crop size, respectively. In infer.py, both are ignored: the batch size is fixed to 1 and the image size becomes the...
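To make the train/infer difference concrete, here is a minimal sketch. The `effective_settings` function and the plain-dict config are stand-ins invented for illustration; the real project reads these values from its own config files:

```python
# Sketch: how the two scripts pick batch size and image size.
# "train" mode reads both values from the config; "infer" mode
# ignores them (batch size 1, each image kept at its own size).
def effective_settings(mode, cfg, image_shape):
    if mode == "train":
        return cfg["TRAIN.BATCH_SIZE_PER_GPU"], cfg["TRAIN.IMAGE_SIZE"]
    # infer: config values have no effect
    return 1, image_shape

cfg = {"TRAIN.BATCH_SIZE_PER_GPU": 4, "TRAIN.IMAGE_SIZE": (512, 512)}
print(effective_settings("train", cfg, (1080, 1920)))  # (4, (512, 512))
print(effective_settings("infer", cfg, (1080, 1920)))  # (1, (1080, 1920))
```

This is also why changing BATCH_SIZE_PER_GPU cannot fix an OOM at inference time: only the input image size matters there.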
The binary evaluation was done using a threshold of 0.5. We also used Average Precision to evaluate the model across all thresholds.