yoho131
Ah, sorry — after reading the code I did get it running, but the images in my dataset are 5120×5120, and resizing them to 240×240 degrades the results too much. So I set --img-cropsize and --img-resize to 1024 in train_seg.py and test_seg.py, and the run failed with: RuntimeError: The size of tensor a (4097) must match the size of tensor b (226) at non-singleton dimension 1. The error location is shown below, where x.shape is (1, 4097, 896) and self.positional_embedding.shape is (226, 896). After studying the code further, I found that the 226 (226 = (240/16)^2 + 1) comes from the image_size and patch_size here; if image_size could be changed to 1024 with patch_size unchanged, self.positional_embedding.shape would become (4097, 896). So where should I modify image_size? Changing it to 1024 at this spot had no effect.
If I directly change self.grid_size to (64, 64) at this point in PromptAD/CLIPAD/transformer.py, will it affect detection performance? After the change the code runs, but I don't know whether it hurts the checkpoint's performance.
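For reference, a common way to run a ViT checkpoint at a higher resolution without retraining is to interpolate the pretrained positional embedding to the new grid, rather than only changing grid_size (which would leave the extra positions untrained). A minimal sketch, assuming a (tokens, dim) embedding with the class token at index 0 — the function name resize_pos_embed is hypothetical and not part of PromptAD:

```python
# Hypothetical sketch: resize a ViT positional embedding trained at 240x240
# (15x15 grid -> 226 tokens with the class token) to a 64x64 grid (4097
# tokens) via bicubic interpolation of the 2D patch-position grid.
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """pos_embed: (old_tokens, dim), class token first."""
    cls_tok, grid_tok = pos_embed[:1], pos_embed[1:]      # split off class token
    old_grid = int(grid_tok.shape[0] ** 0.5)              # e.g. 15 for 226 tokens
    dim = grid_tok.shape[1]
    # (old_grid*old_grid, dim) -> (1, dim, old_grid, old_grid) for interpolate
    grid_tok = grid_tok.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid_tok = F.interpolate(grid_tok, size=(new_grid, new_grid),
                             mode="bicubic", align_corners=False)
    # back to (new_grid*new_grid, dim) and re-attach the class token
    grid_tok = grid_tok.permute(0, 2, 3, 1).reshape(new_grid * new_grid, dim)
    return torch.cat([cls_tok, grid_tok], dim=0)

pe = torch.randn(226, 896)              # shape from the checkpoint
print(resize_pos_embed(pe, 64).shape)   # torch.Size([4097, 896])
```

This keeps the checkpoint's learned spatial layout (just resampled), which usually degrades performance far less than leaving 3871 of the 4097 positions effectively random.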
I have run gen_train_json.py and gen_val_json.py on my custom dataset to obtain the four JSON files shown in Figure 1. However, how can I obtain the Few-shot Normal Samples and...
Hello, after running main.py, I obtained the following ten .pyth files. However, when I tried to execute test.py on my custom dataset, I found that the Few-shot Normal Samples mentioned...
If you are using an Ubuntu system, it might be located here: /home/user/.cache/clip. Just copy it to the InCTRL-main directory. (At least that's how it worked for me.)
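To make that tip concrete, a hedged sketch of the copy step — the cache path and the *.pt filename pattern are assumptions based on the default CLIP download location, not verified against InCTRL:

```shell
# Hypothetical sketch: copy any cached CLIP weight files into the repo root.
# "$HOME/.cache/clip" and the *.pt pattern are assumed defaults; adjust to
# your own cache location and checkout directory.
mkdir -p InCTRL-main
for f in "$HOME/.cache/clip/"*.pt; do
  if [ -e "$f" ]; then cp "$f" InCTRL-main/; fi
done
```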