LogicCheckGPT
[ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detects and mitigates object hallucinations in LVLMs through logical closed loops.
Issues (2)
Hi @Hyperwjf, thanks for sharing your great work! I am trying to reproduce the results reported in the paper for LLaVA-1.5-7B on POPE. The reproduced results are as follows:...
Nice work. I would like to ask a question about LURE. LURE needs to mask objects during inference and then correct them. However, POPE and MME are discriminative...