Blake
Try replacing `page.type("input[name='username']", "test")` with `page.evaluate("document.querySelector(\"input[name='username']\").value = 'test';")`
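In case it helps, here's a minimal sketch of that workaround, assuming Playwright's sync Python API (the same `page.type`/`page.evaluate` calls exist in Puppeteer-style libraries too); the URL is just a placeholder:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")  # hypothetical page

    # Set the field value directly via JS instead of simulating
    # keystrokes with page.type().
    page.evaluate("document.querySelector(\"input[name='username']\").value = 'test';")

    browser.close()
```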
Did that `add_argument` resolve the issue for you, @Bgirish0?
Also, the code tends to hang here, in `truncate_left()` in `utils.py`
+1 Also curious about this
> @orrzohar yes, the model supports batching. For that, you just have to pass the prompts as a list of strings, along with the list of visuals. Also you can...
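A minimal sketch of the batched call as described above; `batched_generate` is a hypothetical wrapper and the `model.generate(prompts, visuals)` signature is an assumption, not the repo's confirmed API:

```python
from typing import List
from PIL import Image

def batched_generate(model, prompts: List[str], visuals: List[Image.Image]) -> List[str]:
    # Illustrative only: forwards matched lists of prompts and images to a
    # batch-aware generate(); the real entry point may differ.
    assert len(prompts) == len(visuals), "expected one visual per prompt"
    return model.generate(prompts, visuals)  # assumed signature

# Usage (model loading omitted; use the repo's own loader):
# outputs = batched_generate(model,
#                            ["Describe image one.", "Describe image two."],
#                            [Image.open("a.jpg"), Image.open("b.jpg")])
```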
Do you think it'd be straightforward to swap the vicuna-7b base for a llama-3-8b one? e.g. https://huggingface.co/lmms-lab/llama3-llava-next-8b
+1 would love to try out a QLoRA/LoRA fine-tune
Can't simply use Hugging Face `transformers` because the checkpoint is missing a preprocessor_config.json
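For reference, this is roughly the vanilla `transformers` loading path that trips on the missing file; it assumes the `LlavaNext*` classes from recent `transformers` releases and the checkpoint linked above:

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

repo = "lmms-lab/llama3-llava-next-8b"

# Presumably fails here: the processor needs an image processor config,
# and the repo ships no preprocessor_config.json.
processor = LlavaNextProcessor.from_pretrained(repo)
model = LlavaNextForConditionalGeneration.from_pretrained(repo)
```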