Rabiul Awal
I found a solution: remove `torch_dtype`, and it should work fine!

```
model = Idefics2ForConditionalGeneration.from_pretrained(
    args.model_name,
    device_map="auto",
    low_cpu_mem_usage=True,
    quantization_config=bnb_config if USE_QLORA else None,
)
```
I can confirm that the `generate()` function is not behaving as intended; I did some profiling a few weeks ago.
Here's a processor that I wrote to make it work.

```
from LLaVA.llava.constants import (
    DEFAULT_IM_END_TOKEN,
    DEFAULT_IM_START_TOKEN,
    DEFAULT_IMAGE_TOKEN,
    IMAGE_TOKEN_INDEX,
)
from LLaVA.llava.conversation import conv_templates
from LLaVA.llava.mm_utils import tokenizer_image_token


class LlaVaProcessor:
    def __init__(self,...
```