LWShowTime
> I found that commenting out line 131 in /model/blip.py fixes the problem.
>
> Don't know why; hope someone can provide the details...
I solved this problem: with transformers 4.16.0 everything is OK. But I was using transformers 4.36.2, and in that case, at line 818 of transformers' generation_utils.py, you...
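If you just need a working setup, here is a minimal sketch of pinning the version reported above as working (assuming nothing else in your environment requires a newer transformers):

```python
# Pin the transformers version reported to work above.
# Run in a shell first:
#   pip install transformers==4.16.0
import transformers

# Sanity-check that the pinned version is actually the one imported.
assert transformers.__version__ == "4.16.0", transformers.__version__
```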
Can you guys run the sample code? Where did you find the dataset?
My launch: `--version=Mylocalpath/LISA/ --vision_tower=Mylocalpath/CLIP-vit-large-patch14/ --precision=fp16 --load_in_4bit`

Actually, this is a bug in DeepSpeed; I think you can avoid it by not using fp16. Try bf16 or fp32 @ZichengDuan
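As a rough illustration of the workaround (a sketch, not code from the repo; assumes a CUDA GPU is available, and `model` here is just a stand-in module):

```python
import torch

# Prefer bf16: it keeps fp32's exponent range, so it sidesteps the
# overflow that fp16 can hit inside DeepSpeed's fused kernels.
# Fall back to fp32 on GPUs without bf16 support.
dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float32

# Stand-in for the real model; in the launch above this corresponds
# to passing --precision=bf16 (or fp32) instead of fp16.
model = torch.nn.Linear(8, 8).cuda().to(dtype)
print(next(model.parameters()).dtype)
```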
> @ZichengDuan Same situation as you, so I chose a GPU with 32G VRAM, and everything goes well LOL. But the problem of dim1 > dim2 when using 4bit...
@shell-nlp It can be solved if you use a GPU with 32G VRAM.
Does this warning about bfloat16 matter? In the last version, I remember inference was fast: segmenting an image took only several seconds. However, the new 13B...
@X-Lai And when I run version 2 on multi-GPU devices, I get this error: `indices should be either on cpu or on the same device as the indexed tensor...`
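For anyone hitting the same error, this is the usual shape of the fix: move the index tensor onto the indexed tensor's device before indexing (a sketch with hypothetical tensor names; assumes a CUDA device is available):

```python
import torch

# The error means the indexed tensor and the index tensor live on
# different devices (e.g. a model shard on cuda:0, indices on cpu).
logits = torch.randn(4, 10, device="cuda:0")   # stand-in for a model output
idx = torch.tensor([1, 3])                     # created on CPU by default

# Moving the indices to the indexed tensor's device resolves it.
out = logits[idx.to(logits.device)]
print(out.shape)  # torch.Size([2, 10])
```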