MMMC
Official repository for Robust Multimodal Large Language Models Against Modality Conflict
🌟 Overview
This repository provides the code and dataset for our paper:
Robust Multimodal Large Language Models Against Modality Conflict.
- GitHub Repository: zmzhang2000/MMMC
- Hugging Face Hub: ustc-zhangzm/MMMC
- Paper: OpenReview PDF
📦 Multimodal Modality Conflict (MMMC) Dataset
The MMMC dataset is available on the Hugging Face Hub. You can easily download and use it as follows:
from datasets import load_dataset

# Download the MMMC dataset from the Hugging Face Hub
dataset = load_dataset("ustc-zhangzm/MMMC")
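After loading, you can inspect the available splits and a sample record. The split name used below is an assumption, so check the dataset card for the actual layout:

# List the splits and features returned by load_dataset
print(dataset)
# Peek at one record (the "train" split name is assumed here)
print(dataset["train"][0])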
Note: The dataset is generated by large language models and may contain some noise. We recommend using the dataset for research purposes only.
🚀 Improving the Robustness of MLLMs
We provide code for supervised fine-tuning and reinforcement learning to enhance the robustness of Multimodal Large Language Models (MLLMs) under modality conflict scenarios.
- Please follow the documentation for instructions on running the code.
- Detailed explanations of these methods are available in our paper.
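As a minimal sketch of how supervised fine-tuning data could be prepared, the snippet below maps MMMC records into the chat-message format expected by common SFT trainers. The "train" split and the "question"/"answer" field names are assumptions and may not match the actual MMMC schema; the complete training scripts and configurations are covered in the documentation referenced above.

from datasets import load_dataset

# Minimal sketch: convert each MMMC record into a chat-style example for SFT.
# The "train" split and the "question"/"answer" fields are assumptions for illustration.
dataset = load_dataset("ustc-zhangzm/MMMC")

def to_chat(example):
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

sft_data = dataset["train"].map(to_chat)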
📄 License
This dataset is distributed under the CC BY-SA 3.0 license.
📖 Citation
If you find this work helpful for your research, please cite our paper:
@inproceedings{zhang2025robust,
  title     = {Robust Multimodal Large Language Models Against Modality Conflict},
  author    = {Zongmeng Zhang and Wengang Zhou and Jie Zhao and Houqiang Li},
  booktitle = {Forty-second International Conference on Machine Learning},
  year      = {2025},
  url       = {https://openreview.net/forum?id=SP43jVv7fJ}
}