Why does the TensorRT-LLM LLaVA-1.5 result differ from the HF output?
System Info
- tensorrt-llm: 0.9.0.dev2024022700
- GPU: L40S
- tensorrt-llm docker
- driver: 535.129.03
Who can help?
No response
Information
- [X] The official example scripts
- [ ] My own modified scripts
Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
Reproduction
- Train the LLaVA model using the official LLaVA code.
- Convert the official LLaVA model to a HuggingFace LLaVA model.
- Convert the HuggingFace LLaVA model to TensorRT-LLM.

Benchmark (mmbench_cn): with the official LLaVA as ground truth, HF LLaVA scores 99% (the HF model's outputs agree with the official model's about 99.9% of the time), while llava-trt scores 93%. (In what follows, the hf-llava output is taken as the ground truth.) I randomly picked an example where the TensorRT output differs from the HF output: trt-llava answers B, hf-llava answers A. I tried the following methods, but none of them solved the problem:
- Is a numerical error in the CLIP model causing the final result to be wrong? (No.)
- HuggingFace computes the vision_tower output in float32 and then converts it to the float16 language_model input through the mm_projector. The language model's inputs_embeds are not identical, so I fed HF's inputs_embeds into TensorRT-LLM; the output is still B, not the A produced by llava-hf (a comparison sketch follows the outputs below).
The language_model input is the same:
hf output:
llava-trt output:
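For context, below is a minimal sketch of the kind of embedding comparison described above, assuming the public `llava-hf/llava-1.5-7b-hf` checkpoint and the standard `transformers` LLaVA API (`vision_tower`, `multi_modal_projector`); the model id, prompt, and image path are placeholders, not the exact setup used in this issue.

```python
# Sketch: extract the language model's inputs_embeds from HF LLaVA so they can be
# dumped and compared against what the TensorRT-LLM runner receives.
# The checkpoint, prompt, and image path below are placeholders (assumptions).
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # public checkpoint, not the custom one from this issue
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

# Run the vision tower and projector in float32 (mirroring the HF float32 path
# described above), then cast the image features back to float16.
model.vision_tower.float()
model.multi_modal_projector.float()

image = Image.open("example.jpg")  # placeholder image
prompt = "USER: <image>\nWhat is shown in the image?\nASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    vision_out = model.vision_tower(
        inputs["pixel_values"].float(), output_hidden_states=True
    )
    # HF LLaVA-1.5 uses the second-to-last hidden state and drops the CLS token.
    selected = vision_out.hidden_states[-2][:, 1:]
    image_embeds = model.multi_modal_projector(selected).to(torch.float16)

    # Text-token embeddings from the language model's embedding table.
    text_embeds = model.get_input_embeddings()(inputs["input_ids"])

# Dump image_embeds / text_embeds and compare them elementwise (max abs diff,
# torch.allclose with a small atol) against the embeddings passed to TensorRT-LLM.
```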
Expected behavior
The LLM should produce the same output when given the same input.
Actual behavior
trt-llm output (B):
[[ 1, 319, 13563, 1546, 263, 12758, 1404, 322, 385, 23116, ...]
(Pdb) output_ids[0,0,:2]
tensor([  1, 319], device='cuda:0', dtype=torch.int32)
(Pdb) output_ids[0,0,input_lengths[0]:]
tensor([350,   2,   2,  ...,   2,   2,   2], device='cuda:0', dtype=torch.int32)
Additional notes
hf output (A): [tensor([319, 2], device='cuda:0')]
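To make the raw IDs above easier to read, here is a small sketch that decodes the divergent generated tokens with a LLaVA tokenizer; the `llava-hf/llava-1.5-7b-hf` model id is an assumption (this issue uses a custom checkpoint), and the ID lists are copied from the dumps above.

```python
# Decode the first generated tokens from the dumps above to see the A/B divergence.
# The tokenizer id is an assumption; the issue actually uses a custom checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llava-hf/llava-1.5-7b-hf")

trt_generated = [350, 2]  # output_ids[0, 0, input_lengths[0]:] above (2 is the EOS id)
hf_generated = [319, 2]   # the HF output tensor above

print("trt-llm:", tokenizer.decode(trt_generated, skip_special_tokens=True))  # "B" per the issue
print("hf     :", tokenizer.decode(hf_generated, skip_special_tokens=True))   # "A" per the issue
```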
Due to differences in kernel selection and kernel implementation, TensorRT-LLM often generates slightly different results. Unless there is an obvious accuracy regression, we consider this reasonable.
@byshiue I have seen a significant decrease in accuracy of the TRT outputs on my test set. I would like to know how you determined that the TRT output is reasonable.
We use MMLU and a summarization task to evaluate. Could you try reproducing the accuracy drop on a public model with the public example, and share your reproduction steps so that it is easier for us to reproduce your issue?
@byshiue Thank you very much for your reply, and I'm sorry for the delayed response. Over the past few days I have been trying to put together a Docker image and minimal reproduction code. For the convenience of reproduction, I upgraded to the latest TRT-LLM (0.11.0.dev2024051400), and now my previous code no longer produces results.
It looks like this: #1632
A temperature of 0.0 is not a valid value in the current TensorRT-LLM. Please use greedy search and don't set the temperature directly, or set a very small temperature like 1e-6.
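To illustrate that advice, here is a minimal greedy-decoding sketch assuming the `ModelRunner` interface used by `examples/run.py` in these releases; the tokenizer and engine paths are placeholders, the keyword names should be checked against the installed version, and the LLaVA image features (which `examples/multimodal` passes in via the prompt table) are omitted.

```python
# Sketch: greedy decoding with the TensorRT-LLM Python runtime instead of temperature=0.0.
# Paths are placeholders; keyword names mirror examples/run.py for these releases.
import torch
from transformers import AutoTokenizer
from tensorrt_llm.runtime import ModelRunner

tokenizer = AutoTokenizer.from_pretrained("/path/to/hf/llava")          # placeholder path
runner = ModelRunner.from_dir(engine_dir="/path/to/llava_trt_engine")   # placeholder path

input_ids = tokenizer("USER: hello\nASSISTANT:", return_tensors="pt").input_ids.int()

with torch.no_grad():
    output_ids = runner.generate(
        [input_ids[0]],                 # list of 1-D input-id tensors
        max_new_tokens=64,
        end_id=tokenizer.eos_token_id,
        pad_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
        top_k=1,                        # greedy search: always take the arg-max token
        num_beams=1,                    # no beam search
        # leave temperature at its default instead of passing 0.0
    )
```

With `top_k=1` the runtime always picks the highest-probability token, which is what "greedy search" means here, so no temperature workaround should be needed.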
@byshiue I set temperature=1e-6 as you suggested, but I found that every inference after the first one (which does produce output) fails: self.tokenizer.batch_decode(output_ids[0, :, input_lengths[0]:]) raises an error:
like this: #1299, but my TRT-LLM version is 0.11.
Thank you for trying. Could you share the full end-to-end steps to reproduce your issue (how you convert the checkpoint, build the engine, and run the example)?
@byshiue here
Could you share the end-to-end steps to reproduce your issue? It is hard for me to understand how to use the scripts you shared.
@byshiue See readme.md in the zip; I just start a web server for llava-trt to process the results.
> A temperature of 0.0 is not a valid value in the current TensorRT-LLM. Please use greedy search and don't set the temperature directly, or set a very small temperature like 1e-6.
So how do I enable greedy search?
Hi @bleedingfight, do you still have any further issues or questions? If not, we'll close this soon.