Did not detect the .env file when using torchrun
When running `python run.py --data ChartQA_TEST --model 7B --verbose`, everything works but it is very slow. When using a torchrun command such as `torchrun --nproc-per-node=4 run.py --data ChartQA_TEST --model 7B --verbose`, I get the following errors:
```
W0423 17:11:07.226000 3089305 site-packages/torch/distributed/run.py:793] *****************************************
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11,812] WARNING - RUN - run.py: main - 174: --reuse is not set, will not reuse previous (before one day) temporary files
[2025-04-23 17:11:11] WARNING - run.py: main - 174: --reuse is not set, will not reuse previous (before one day) temporary files
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
[2025-04-23 17:11:11] ERROR - misc.py: load_env - 212: Did not detect the .env file at /data3/xxf/VLMEvalKit/.env, failed to load.
```
Could you help me solve this? Many thanks.
I tested with `torchrun --nproc-per-node=1 run.py --data ChartQA_TEST --model Eagle-X5-7B --verbose` and it works fine. Please check whether the env file `/data3/xxf/VLMEvalKit/.env` exists.
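If the file is missing, creating one at the repo root should silence the error. As a minimal sketch, a `.env` might look like the following; the keys shown are illustrative placeholders (typically used for API-based judge models), so set only those your evaluation actually needs:

```
# /data3/xxf/VLMEvalKit/.env -- illustrative keys only, replace with your own values
OPENAI_API_KEY=sk-...
```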
Many thanks! I have another question: is VLMEvalKit suitable for evaluating models trained with LoRA? Specifically, I fine-tuned Qwen-VL using LoRA and want to test its performance. Does the toolkit support this scenario, or are there any compatibility considerations I should be aware of?
If you fine-tune a supported model, evaluating it with VLMEvalKit should be straightforward. You'll need to register your model in vlmeval/config.py, reusing the base model class and pointing model_path at your fine-tuned weights, as follows:
```python
qwen_series = {
    "qwen_lora": partial(QwenVL, model_path="path/to/model_path"),  # your fine-tuned checkpoint
    "qwen_base": partial(QwenVL, model_path="Qwen/Qwen-VL"),
    # ... other entries ...
}
```
See `qwen_series` in `vlmeval/config.py` for details.
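One compatibility note: `model_path` is expected to point at a full checkpoint, so if you trained a LoRA adapter (e.g. with HuggingFace PEFT) you may need to merge it into the base weights first. A minimal sketch, assuming a PEFT-trained adapter; all paths are placeholders:

```python
# Minimal sketch, assuming the adapter was trained with HuggingFace PEFT.
# "path/to/lora_adapter" and "path/to/model_path" are placeholders.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the base model plus the LoRA adapter, then fold the adapter
# into the base weights so a plain checkpoint can be saved.
model = AutoPeftModelForCausalLM.from_pretrained(
    "path/to/lora_adapter", trust_remote_code=True
)
merged = model.merge_and_unload()

# Save a standalone checkpoint that model_path in vlmeval/config.py can point at.
merged.save_pretrained("path/to/model_path")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)
tokenizer.save_pretrained("path/to/model_path")
```

After registering the merged checkpoint as `qwen_lora`, it can be evaluated like any built-in model, e.g. `torchrun --nproc-per-node=4 run.py --data ChartQA_TEST --model qwen_lora --verbose`.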