aviadmx

Results: 14 issues by aviadmx

As seen in the README, there's a checkpoint for the model that detects faces: ![image](https://user-images.githubusercontent.com/95345571/179947934-be5bd06f-287a-4a5b-8ffd-ff42712b2be9.png) I also wonder whether there's a checkpoint that detects faces along with other objects (like cars,...

It seems that multi-GPU training and evaluation work great; however, when debugging you may opt to use a single GPU. In that case the code breaks in...
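One common reason single-GPU debug runs break is code that assumes a distributed process group was initialized by the launcher. A minimal sketch of a guard, assuming the break is of that kind (the helper names `world_size` and `is_distributed` are hypothetical, not from the repo):

```python
import os

def world_size() -> int:
    """Number of processes the launcher started (1 for a plain debug run)."""
    return int(os.environ.get("WORLD_SIZE", "1"))

def is_distributed() -> bool:
    # torchrun / torch.distributed.launch export WORLD_SIZE; a plain
    # `python train.py` debug invocation does not, so this returns False there.
    return world_size() > 1

# Collective calls (all_reduce, barrier, ...) would then run only when
# is_distributed() is True, with a plain fallback path for single-GPU debugging.
```

This keeps one code path usable both under the launcher and under a debugger.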

The model solves the panoptic segmentation task, so why does the validation dataset use the instance segmentation annotations?

```
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file='./datasets/annotations/panoptic_train2017_detection_format.json',
        img_prefix=data_root + 'train2017/',
...
```

When loading the Swin-L checkpoint by adding a `load_from` line to the config `configs/panformer/panformer_swinl_24e_coco_panoptic.pyz` as follows:

```
load_from = './pretrained/panoptic_segformer_swinl_2x.pth'
```

the loading fails with an error about a key mismatch:

```
unexpected...
```
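A key-mismatch error of this kind usually lists "missing" and "unexpected" keys, and diffing the two key sets often reveals a uniform prefix (e.g. `module.` from DistributedDataParallel, or `backbone.`). A sketch of that diff with hypothetical helper names, using toy keys rather than the real checkpoint:

```python
def diff_keys(model_keys, ckpt_keys):
    """Compare a model's state-dict keys against a checkpoint's keys."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    missing = sorted(model_keys - ckpt_keys)     # model expects, checkpoint lacks
    unexpected = sorted(ckpt_keys - model_keys)  # checkpoint has, model lacks
    return missing, unexpected

def strip_prefix(keys, prefix="module."):
    # Checkpoints saved from a DDP-wrapped model often prepend "module.".
    return [k[len(prefix):] if k.startswith(prefix) else k for k in keys]

# Toy example: every key "mismatches" until the DDP prefix is stripped.
missing, unexpected = diff_keys(
    ["backbone.conv1.weight"], ["module.backbone.conv1.weight"])
missing2, unexpected2 = diff_keys(
    ["backbone.conv1.weight"],
    strip_prefix(["module.backbone.conv1.weight"]))
# missing2 and unexpected2 are both empty once the prefix is removed.
```

If the keys differ only by such a prefix, rewriting the checkpoint's keys (or loading non-strictly) typically resolves the error; a genuinely different key set suggests the checkpoint and config belong to different model variants.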

## Bug Description

The EfficientNet example notebook does not compile to FP16.

## To Reproduce

Steps to reproduce the behavior: just open the EfficientNet notebook and try to run all. It...


Whenever trying to load the new model I am getting:

```
--> 764     magic_number = pickle_module.load(f, **pickle_load_args)
    765     if magic_number != MAGIC_NUMBER:
    766         raise RuntimeError("Invalid magic number; corrupt file?")

UnpicklingError:...
```
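An "invalid magic number" `UnpicklingError` from `torch.load` very often means the file on disk is not actually a checkpoint: a truncated download, a git-lfs pointer file, or an HTML error page saved under the model's name. A minimal stdlib sketch (the function name is hypothetical) that inspects the first bytes to tell these apart:

```python
def sniff_checkpoint(path):
    """Guess what a downloaded 'checkpoint' file actually is from its first bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(b"PK"):
        return "zip archive (modern torch.save format)"
    if head.startswith(b"\x80"):
        return "pickle stream (legacy torch.save format)"
    if head.startswith(b"version"):
        return "git-lfs pointer file -- the real weights were never downloaded"
    if head.lstrip().startswith(b"<"):
        return "HTML/XML -- likely an error page saved instead of the model"
    return "unknown"
```

If the sniffer reports anything other than a zip archive or pickle stream, re-downloading the checkpoint (and pulling git-lfs objects, if applicable) is the usual fix.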

Hi, has anyone managed to convert the model to FP16 (half precision) successfully? The encoder (ViT-H) outputs garbage in this case. Has anyone managed to overcome this?
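A plausible culprit (an assumption, not confirmed by the issue) is FP16's narrow numeric envelope: its largest finite value is about 65504 and its 11-bit significand drops low-order bits, so large attention logits or normalization statistics in a ViT can overflow or lose precision. Both effects can be demonstrated with the stdlib alone via `struct`'s IEEE 754 half-precision format code `"e"`:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_fp16(1.0))      # small values survive exactly
print(to_fp16(2049.0))   # rounds to 2048.0: the fp16 ulp at this magnitude is 2
try:
    struct.pack("<e", 70000.0)   # beyond fp16's max finite value (~65504)
except OverflowError as exc:
    print("overflow:", exc)
```

This is why mixed-precision recipes commonly keep numerically sensitive ops (softmax, LayerNorm) in FP32 while casting the rest to FP16.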

Hi, I tried to convert the pretrained model to TensorRT with both Torch-TensorRT and torch2trt from Nvidia. Despite using the same precision (FP32), the model outputs differ significantly on large images...
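Before attributing the discrepancy to the converter, it helps to quantify it per element with both absolute and relative error, since "differ significantly" can mean very different things at different magnitudes. A minimal sketch (the function name and tolerances are illustrative, not from either library), operating on outputs flattened to plain Python lists:

```python
def compare_outputs(ref, test, rel_tol=1e-3, abs_tol=1e-5):
    """Max absolute/relative error between two flattened output sequences."""
    assert len(ref) == len(test), "shape mismatch"
    max_abs = max_rel = 0.0
    for r, t in zip(ref, test):
        err = abs(r - t)
        max_abs = max(max_abs, err)
        # Guard the denominator so near-zero reference values don't explode.
        max_rel = max(max_rel, err / max(abs(r), abs_tol))
    ok = max_abs <= abs_tol or max_rel <= rel_tol
    return max_abs, max_rel, ok

# Toy usage: a 5% relative error fails the default tolerances.
max_abs, max_rel, ok = compare_outputs([1.0, 2.0], [1.0, 2.1])
```

If the error grows with image size, that points toward accumulated reduction-order differences (e.g. fused kernels, TF32 on Ampere GPUs) rather than a broken conversion.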

Hi, I am interested in tweaking the [SD Gallery component](https://github.com/Sygil-Dev/sygil-webui/tree/master/frontend/dists/sd-gallery/dist); however, I can only find the dist code of it and not the original code used to build it (as...

What's the license of the dataset?