I ran the following command:

```
# transductive 1-shot 5-way Omniglot.
python -u run_omniglot.py --shots 1 --inner-batch 25 --inner-iters 3 \
  --meta-step 1 --meta-batch 10 --meta-iters 100000 \
  --eval-batch 25 --eval-iters 5...
```
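For context, here is a minimal sketch of the Reptile meta-update that `run_omniglot.py` trains with. It follows the Reptile paper rather than the repository's code; `model`, `task_loader`, and the default argument values are illustrative placeholders, not names from the repo.

```
# Hedged sketch of Reptile's outer update, assuming PyTorch.
import torch
import torch.nn.functional as F

def reptile_outer_step(model, task_loader, inner_iters=3,
                       inner_lr=1e-3, meta_step=1.0):
    # Snapshot the meta-parameters before adapting to the sampled task.
    old_params = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)

    # Inner loop: a few SGD steps on batches from one task
    # (assumes the loader yields at least `inner_iters` batches).
    batches = iter(task_loader)
    for _ in range(inner_iters):
        x, y = next(batches)
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Reptile update: interpolate the meta-parameters toward the
    # task-adapted weights by a step of size `meta_step`.
    with torch.no_grad():
        for p, old in zip(model.parameters(), old_params):
            p.copy_(old + meta_step * (p - old))
```

With `--meta-step 1` as in the command above, each such step moves the meta-parameters all the way to the adapted weights; across a `--meta-batch` of tasks these updates are averaged.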
I trained the model using the settings provided in README.md:

```
python main_supcon.py --batch_size 1024 \
  --learning_rate 0.5 \
  --temp 0.5 \
  --cosine --syncBN \
  --method SimCLR
```

The loss...
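For reference, with `--method SimCLR` the objective being minimized is the NT-Xent contrastive loss. Below is a generic, hedged re-implementation of that loss (not the repository's exact `main_supcon.py` code); the temperature default matches the `--temp 0.5` setting above.

```
# Hedged sketch of the SimCLR (NT-Xent) loss, assuming PyTorch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: embeddings of two augmented views, each of shape (N, D)."""
    # L2-normalize and stack both views: rows 0..N-1 are view 1,
    # rows N..2N-1 are view 2.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, D)
    sim = z @ z.t() / temperature                          # cosine logits

    # Exclude self-similarity so it is neither positive nor negative.
    sim.fill_diagonal_(float('-inf'))

    # The positive for row i is row i+N, and vice versa.
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

Here `z1` and `z2` would be the projection-head outputs for the two augmented views of each image in the batch.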
Add new SOTA contrastive learning work on multi-grained tasks, including object detection, segmentation, and keypoint detection.
+ **Paper 1**: [Learning Compact Vision Tokens for Efficient Large Multimodal Models](https://arxiv.org/abs/2506.07138). Code: https://github.com/visresearch/LLaVA-STF
+ **Paper 2**: [Diversity-Guided MLP Reduction for Efficient Large Vision Transformers](https://arxiv.org/abs/2506.08591). Code: https://github.com/visresearch/DGMR
We add our new pruning method for LLMs.

**Paper**: SDMPrune: Self-Distillation MLP Pruning for Efficient Large Language Models
**Link**: https://arxiv.org/abs/2506.11120
**Code**: https://github.com/visresearch/SDMPrune