Zihui Wu

5 comments by Zihui Wu

Hey guys, I have adjusted some of the code in the forward function of the NT_Xent class and now it works, but I just found the multi-GPU performance is much worse than...

> Hey guys, I have adjusted some of the code in the forward function of the NT_Xent class and now it works, but I just found the multi-GPU performance is much worse...
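For context, NT_Xent is the SimCLR-style normalized temperature-scaled cross-entropy loss being discussed above. A minimal NumPy sketch of its forward pass (my own illustration, not the repository's actual implementation, which additionally has to gather embeddings across GPUs) might look like:

```python
import numpy as np

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent loss over two augmented views z_i, z_j of shape (N, D).

    The positive for sample i in one view is the same sample in the
    other view; all remaining 2N - 2 samples act as negatives.
    """
    # Stack the two views: rows 0..N-1 are view 1, rows N..2N-1 are view 2.
    z = np.concatenate([z_i, z_j], axis=0)
    # L2-normalize so the dot product is cosine similarity.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = z_i.shape[0]
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    # Each sample's positive is the same image under the other augmentation.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable log-sum-exp over each row.
    m = sim.max(axis=1, keepdims=True)
    lse = (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True))).ravel()
    # Cross entropy of each positive pair against all other pairs.
    return (lse - sim[np.arange(2 * n), pos]).mean()
```

Under multi-GPU training, each worker only sees its local slice of the batch, so without an all-gather across devices the effective number of negatives shrinks, which is one common explanation for the degraded performance mentioned above.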

Same issue, and when I tried to solve the problem

```
Error generating response: 'Qwen2ForCausalLM' object has no attribute 'load_lora'
```

by using:

```python
from unsloth_zoo.vllm_utils import load_lora
....
output...
```

> Not sure if it helps, but I also tried this in the above notebook and it reproduces the adapter outputs exactly :)

Hi, can you share your versions of transformers...

This code works for me:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import argparse

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load model and...
```
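The snippet above is cut off right after the `argparse` import. As a self-contained sketch of the argument-parsing half such a script typically needs, the flag names below are my own assumptions, not taken from the original:

```python
import argparse

def build_parser():
    # Flag names are illustrative; the original script's flags are not shown.
    p = argparse.ArgumentParser(
        description="Run a base causal LM with a PEFT LoRA adapter"
    )
    p.add_argument("--base_model", required=True,
                   help="Hub id or local path of the base model")
    p.add_argument("--adapter_path", required=True,
                   help="Directory containing the trained LoRA adapter")
    p.add_argument("--max_new_tokens", type=int, default=128,
                   help="Generation length budget")
    return p
```

The parsed `--base_model` and `--adapter_path` values would then feed `AutoModelForCausalLM.from_pretrained` and `PeftModel.from_pretrained` respectively, matching the imports in the comment above.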