Seungwoo, Jeong

Results: 4 comments by Seungwoo, Jeong

I'm experiencing the same thing in my WSL2 Ubuntu environment. When I run the code below, it restarts the kernel. I'm currently using an RTX 4090 with 24 GB of VRAM, so...

I didn't use LLaMA, but I got the same error when fine-tuning BLIP with LoRA. I checked the dtype of the parameters in all the layers, and they were all...
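A minimal sketch of the kind of dtype audit described above, iterating over every layer's parameters. The toy model here is a stand-in, not the actual BLIP setup; with a real run you would pass the loaded BLIP model instead.

```python
from collections import Counter

import torch.nn as nn


def count_param_dtypes(model: nn.Module) -> Counter:
    """Tally the dtype of every parameter so mixed-precision mismatches stand out."""
    return Counter(str(p.dtype) for _, p in model.named_parameters())


# Toy stand-in model; substitute the fine-tuned model in practice.
toy = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
print(count_param_dtypes(toy))  # Counter({'torch.float32': 4})
```

If more than one dtype appears in the counter, some layers were cast (or left uncast) inconsistently, which is a common source of LoRA fine-tuning errors.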

I could run the 7B model in a Google Colab environment with a T4 GPU (free tier). The 7B model is fairly light to use, but it consumes a lot of system RAM when loading, so...
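A back-of-the-envelope sketch of why a 7B model strains Colab's memory: weights alone take 2 bytes per parameter in fp16 and 4 in fp32 (activations, optimizer state, and the loading process itself add more on top).

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory taken by model weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30


fp16 = weight_memory_gib(7e9, 2)  # ~13 GiB: fits a 16 GB T4 with little headroom
fp32 = weight_memory_gib(7e9, 4)  # ~26 GiB: exceeds the T4 outright
print(round(fp16, 1), round(fp32, 1))  # 13.0 26.1
```

This is why loading in half precision (or quantized) is effectively required on the free tier, and why the loading step spikes system RAM before weights move to the GPU.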

I'm curious about that too.