hallucination-mitigation topic
ANAH
[ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO
Awesome-LVLM-Hallucination
An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models
HALVA
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
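The core idea behind UQ-based hallucination detection can be illustrated without the package itself. The sketch below (assumptions: not the uqlm API; `answer_entropy` and `likely_hallucination` are hypothetical helper names) samples an LLM several times for the same prompt and treats high disagreement among the sampled answers as an uncertainty signal:

```python
# Minimal consistency-based uncertainty sketch (NOT the uqlm API).
# Assumption: the same prompt has been sampled from an LLM several
# times; inconsistent answers suggest the model is uncertain and
# may be hallucinating.
from collections import Counter
import math

def answer_entropy(samples: list[str]) -> float:
    """Shannon entropy (nats) of the empirical answer distribution."""
    counts = Counter(s.strip().lower() for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def likely_hallucination(samples: list[str], threshold: float = 0.5) -> bool:
    # High entropy -> inconsistent samples -> flag the answer.
    return answer_entropy(samples) > threshold

print(likely_hallucination(["Paris", "paris", "Paris"]))       # False
print(likely_hallucination(["Paris", "Lyon", "Marseille"]))    # True
```

Real UQ toolkits refine this with semantic clustering of answers and white-box signals such as token log-probabilities, but the sample-and-compare loop above is the common core.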
Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Re-Align
[EMNLP 2025] A novel alignment framework that leverages image retrieval to mitigate hallucinations in vision-language models.
Uncertainty-o
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".