ChartLlama-code
I ran inference but hit an issue. Here is my inference script: `export CUDA_VISIBLE_DEVICES=5 python -m llava.eval.model_vqa_lora \ --model-base liuhaotian/llava-v1.5-13b \ --model-path listen2you002/ChartLlama-13b \ --question-file playground/data/chartqa/test_aug.jsonl \...
Hello, and thank you for releasing all of this amazing work. I am encountering a CLIPVision issue: when I try to load the LoRA weights from the ChartLlama-13b model, the "mm_vision_tower": "/mnt/share_1227775/yandali/multimodal/models/ft_local/clip-vit-large-patch14-336/" was...
Hello, I hope this message finds you well. I am reaching out to ask whether the dataset of generated code could be shared publicly. Could you please provide...
Please give more details about each parameter in the README: for example, the question file and the answer file (what do they look like inside?), what CHUNKS is, etc.
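For context on the CHUNKS question above: ChartLlama's eval scripts follow the LLaVA convention, where the question file is sharded across parallel processes. This is a hedged sketch of that pattern, not the repo's exact script; the flag names `--num-chunks`/`--chunk-idx` and the loop structure are assumptions based on standard LLaVA eval scripts.

```shell
# Sketch of LLaVA-style chunked evaluation (assumed convention, not
# copied from ChartLlama-code): split the question file into CHUNKS
# shards and launch one process per shard, typically one per GPU.
CHUNKS=2
for IDX in $(seq 0 $((CHUNKS - 1))); do
  # In the real script this would be a python -m llava.eval.model_vqa_lora
  # invocation with --num-chunks $CHUNKS --chunk-idx $IDX; echoed here
  # to show how each shard is addressed.
  echo "launching chunk $IDX of $CHUNKS"
done
```

Each process answers only its own shard of the questions, and the per-chunk answer files are concatenated afterwards.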
Is the official ChartLlama LoRA model not open-sourced?
Hi, thanks for open-sourcing this! I want to reproduce the chart-to-text and chart-to-table performance, and I found that the evaluation scripts are released [here](https://github.com/tingxueronghua/ChartLlama-code/blob/main/scripts/v1_5/eval/chart_to_text.sh) and [here](https://github.com/tingxueronghua/ChartLlama-code/blob/main/scripts/v1_5/eval/derender_to_csv.sh) in this repo...
Hi, I read that you were cleaning the samples in December 2023, and I was hoping that work would be completed by now. I was wondering if the complete dataset is...