JiamingLv
I strictly followed the documentation for phi3_mini_4k_instruct_clip_vit_large_p14_336.

**Run command**
`NPROC_PER_NODE=4 xtuner train llava_phi3_mini_4k_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2 --seed 1024`

**Conda environment**
python==3.10, transformers==4.41.1, torch==2.3.0, CUDA 12.1, 4x RTX 3090

> 05/23 08:10:20 - mmengine...
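For reference, a minimal sketch of recreating the environment reported above (the env name `xtuner-env`, the pip-based install, and the CUDA 12.1 wheel index are assumptions, not from the original report):

```
# Assumed setup; package versions taken from the report above
conda create -n xtuner-env python=3.10 -y
conda activate xtuner-env
# torch 2.3.0 built against CUDA 12.1 (assumed wheel index)
pip install torch==2.3.0 --index-url https://download.pytorch.org/whl/cu121
pip install transformers==4.41.1
```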
Hi, this is really great work. Could you please provide the **training log** for F-ViT+CLIPSelf on OV-LVIS? Thanks!
### Motivation

Hi, I’m trying to perform LoRA-based distillation on WAN2.1-1.3B using the following script:

```
#!/bin/bash
#SBATCH --job-name=t2v
#SBATCH --partition=main
#SBATCH --nodes=8
#SBATCH --ntasks=8
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:8
#SBATCH...
```