刘昊
```
Traceback (most recent call last):
  File "train.py", line 173, in <module>
    main()
  File "train.py", line 155, in main
    coco_eval.evaluate_coco(dataset_val, retinanet)
  File "/home/liuhao/seedland/retina/pytorch-retinanet/retinanet/coco_eval.py", line 73, in evaluate_coco
    coco_eval = COCOeval(coco_true, coco_pred, 'bbox')
...
```
This is my code for generating an image, but the generated image is random.

prior model: https://huggingface.co/laion/DALLE2-PyTorch/blob/main/prior/best.pth
decoder model: https://huggingface.co/laion/DALLE2-PyTorch/blob/main/decoder/1.5B/latest.pth

```
import torch
from dalle2_pytorch import DiffusionPrior, DiffusionPriorNetwork, OpenAIClipAdapter
from dalle2_pytorch import ...
```
Will using cosine_distance or euclidean_squared_distance give similar evaluation results?
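A minimal sketch of why the two often agree (my assumption: the feature vectors are L2-normalized before the distance matrix is computed, as is common in re-ID pipelines): for unit vectors, squared Euclidean distance is exactly `2 ×` cosine distance, so any rank-based metric (CMC, mAP) sees the same ordering. The variable names below are illustrative, not from the library being discussed.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # hypothetical query features
g = rng.normal(size=(10, 8))   # hypothetical gallery features

# L2-normalize both sets of features
q /= np.linalg.norm(q, axis=1, keepdims=True)
g /= np.linalg.norm(g, axis=1, keepdims=True)

cos_dist = 1.0 - q @ g.T                                   # cosine distance
eucl_sq = ((q[:, None, :] - g[None, :, :]) ** 2).sum(-1)   # squared Euclidean

# For unit vectors: ||a - b||^2 = 2 - 2 a.b = 2 * cosine_distance
assert np.allclose(eucl_sq, 2.0 * cos_dist)

# Hence the per-query rankings, and any rank-based evaluation, coincide
assert (np.argsort(cos_dist, axis=1) == np.argsort(eucl_sq, axis=1)).all()
```

If the features are *not* normalized, the equivalence breaks and the two metrics can rank gallery items differently.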
Re-training
I used COCO 2017 and your pretrained model dla34.pth to train again, but the loss keeps growing from 3.1 to around 17. Why doesn't the loss stay flat or decrease?
When I load the EfficientDet model, a DarkNet model is loaded instead.
How many iterations was the VGG model you provided trained for? I retrained the VGG model with your code for about 25 iterations, and the results do not look good.
The Motorbike category has 6 parts, but num_seg_classes is 4 for the ShapeNet dataset: https://github.com/fxia22/pointnet.pytorch/blob/master/utils/train_segmentation.py#L60