Collin McCarthy
Hello, let's say I want to fine-tune the [swinv2_large_patch4_window12to24_192to384_22kto1k_ft](https://github.com/microsoft/Swin-Transformer/blob/main/configs/swinv2/swinv2_large_patch4_window12to24_192to384_22kto1k_ft.yaml) pre-trained checkpoint for a new task/resolution. It was already fine-tuned on ImageNet-1k using `PRETRAINED_WINDOW_SIZES: [ 12, 12, 12, 6 ]`, which...
**Describe the feature** Add support for OneFormer in mmdetection. It looks like there is one open PR for this (#10714) and one closed PR (#10661), and it's referenced in...
**Describe the bug** Semantic segmentations are not being accurately visualized with `DetLocalVisualizer`, leading to the same artifacts shown in [mmengine issue 741](https://github.com/open-mmlab/mmengine/issues/741) and below. I don't think the...
### What is the feature? I'm working on a project that requires me to try many combinations of models, datasets, tasks and augmentation pipelines. You can do this pretty well...
### What is the feature? Suppose you're training with checkpointing. You save a checkpoint every 10 epochs and run validation every 10 epochs. After the 10th epoch you save a...
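For context, a minimal config fragment sketching that setup, assuming MMEngine-style `train_cfg` / `CheckpointHook` conventions (the keys and the `max_epochs` value are assumptions for illustration, not taken from the issue):

```python
# Assumed MMEngine-style configuration matching the description:
# save a checkpoint every 10 epochs and run validation every 10 epochs.
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=10)  # max_epochs is illustrative
default_hooks = dict(
    checkpoint=dict(type='CheckpointHook', interval=10),  # checkpoint every 10 epochs
)
```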
### Describe the bug When my run finished training, it accidentally deleted a checkpoint (using `wandb.Api()`) that I didn't want deleted. I always back up the `wandb-resume.json` file so I...
### Description Many clusters only support training jobs for a relatively short period of time, e.g. 4 hours, before they are interrupted/preempted and requeued, e.g. with `scontrol requeue` for a...
### Description I'm trying to call `wandb.finish(exit_code=0, quiet=True)` from within a SIGTERM handler. This allows me to choose my exit code and ensure checkpoints finish uploading when the run is...
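A minimal sketch of that setup using the standard library `signal` module (the training loop is omitted and the project name is just a placeholder):

```python
import signal
import sys

import wandb

def handle_sigterm(signum, frame):
    # Flush pending uploads (e.g. checkpoints) and choose the exit code,
    # then exit before the scheduler kills the process.
    wandb.finish(exit_code=0, quiet=True)
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

run = wandb.init(project="example")  # placeholder project name
# ... training loop ...
```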
### Describe the bug I'm adding Wandb to some detectron2 models for baselines, and detectron2 names its config file `config.yaml`. I save this file with `wandb.save(config_yaml_path)` and it uploads...
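A sketch of the logging setup described, assuming detectron2's dumped config lives at a hypothetical `output/config.yaml` (both the path and the project name are placeholders):

```python
import wandb

run = wandb.init(project="detectron2-baselines")  # placeholder project name
config_yaml_path = "output/config.yaml"  # hypothetical path to detectron2's dumped config
wandb.save(config_yaml_path)  # uploads a file literally named config.yaml
```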
### Describe the issue In [these lines](https://github.com/haotian-liu/LLaVA/blob/c121f0432da27facab705978f83c4ada465e46fd/llava/model/llava_arch.py#L316-L319) of `LlavaMetaForCausalLM.prepare_inputs_labels_for_multimodal()`, when we have padded the input we always need to return the padded attention mask. This is as simple as changing...
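A schematic illustration of the point being made (this is not LLaVA's actual code, just the general pattern of padding variable-length sequences and keeping the resulting mask):

```python
import torch

def pad_and_build_mask(seqs, pad_value=0):
    # seqs: list of 1-D tensors with different lengths.
    max_len = max(s.size(0) for s in seqs)
    padded = torch.full((len(seqs), max_len), pad_value, dtype=seqs[0].dtype)
    mask = torch.zeros(len(seqs), max_len, dtype=torch.bool)
    for i, s in enumerate(seqs):
        padded[i, : s.size(0)] = s
        mask[i, : s.size(0)] = True
    # Once padding has been applied, the mask must be returned unconditionally;
    # dropping it (e.g. because the caller originally passed attention_mask=None)
    # loses the information about which positions are padding.
    return padded, mask
```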