M2RAG
Overview:
M2RAG is a benchmark codebase for retrieval-augmented generation (RAG) in multimodal contexts. It evaluates how well multimodal large language models (MLLMs) can leverage knowledge from retrieved multimodal documents when answering questions. Models are tested on four tasks: image captioning, multimodal question answering, fact verification, and image re-ranking, with the goal of improving their effectiveness in multimodal in-context learning.
Target Users:
M2RAG is aimed at researchers and developers working on multimodal language models, especially those who want to improve models' retrieval and generation capabilities in multimodal contexts. It provides a standardized testing platform for evaluating and improving the performance of multimodal large language models.
Use Cases
Researchers can use M2RAG to evaluate the performance of multimodal large language models in image captioning tasks.
Developers can use the code and datasets provided by M2RAG to quickly reproduce the experimental results of multimodal retrieval-augmented generation.
Companies can use the multimodal question answering functionality of M2RAG to develop intelligent customer service systems and improve user experience.
Features
Supports multimodal tasks, including image captioning, multimodal question answering, fact verification, and image re-ranking
Provides a multimodal retrieval-augmented instruction tuning (MM-RAIT) method to improve the performance of models in multimodal in-context learning
Compatible with various pre-trained models, such as MiniCPM-V 2.6 and Qwen2-VL
Provides the complete dataset and code implementation, making it easy for researchers to reproduce and extend experiments
Supports both zero-shot and fine-tuning settings to suit different research needs
Provides detailed evaluation metrics for measuring the performance of generation tasks
Supports multimodal document retrieval, using FAISS for efficient similarity search (see the retrieval sketch after this list)
Provides ready-to-use fine-tuning scripts for the supported pre-trained models
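The retrieval side of M2RAG can be illustrated with a minimal FAISS sketch. This is not the repository's own code: the embedding shapes and data below are placeholders, and the actual scripts (see "How to Use") handle encoding and retrieval end to end.

```python
import numpy as np
import faiss  # pip install faiss-cpu

# Hypothetical precomputed embeddings; in M2RAG these come from the
# encoding step (script/get_embed_test.sh).
corpus_embeddings = np.random.rand(10000, 768).astype("float32")
query_embeddings = np.random.rand(4, 768).astype("float32")

# L2-normalize so that inner product equals cosine similarity.
faiss.normalize_L2(corpus_embeddings)
faiss.normalize_L2(query_embeddings)

# Flat (exact) inner-product index over the multimodal corpus.
index = faiss.IndexFlatIP(corpus_embeddings.shape[1])
index.add(corpus_embeddings)

# Retrieve the top-5 most relevant documents for each query.
scores, doc_ids = index.search(query_embeddings, 5)
print(doc_ids[0], scores[0])
```

A flat index performs exact search; for larger corpora, FAISS approximate indexes (e.g., IVF or HNSW variants) can be swapped in at the cost of some recall.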
How to Use
1. Clone the repository: `git clone https://github.com/NEUIR/M2RAG`
2. Install dependencies: Install the Python packages listed in `requirements.txt`, e.g. `pip install -r requirements.txt`
3. Prepare the dataset: Download the M2RAG dataset or build your own according to the instructions, and place it in the `data` folder
4. Encode the test-set queries and the multimodal corpus: Run `script/get_embed_test.sh` (see the encoding sketch after these steps)
5. Retrieve the most relevant multimodal documents: Run `script/retrieval_test.sh`
6. Perform zero-shot inference using the retrieved documents: Run `script/inference_cpmv.sh` or `script/inference_qwen.sh` (see the inference sketch below)
7. For the image re-ranking task, run `script/compute_ppl_minicpmv.sh` or `script/compute_ppl_qwen2vl.sh` (see the re-ranking sketch below)
8. Use the scripts in `src/evaluation` to evaluate the performance of the generation tasks (see the evaluation sketch below)
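For step 4, the repository's `script/get_embed_test.sh` performs the actual encoding. The sketch below only illustrates the idea of embedding text queries and image documents into a shared vector space with a CLIP-style encoder; the model name and file path are assumptions, not necessarily what M2RAG uses.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical encoder and file path; the retriever behind
# script/get_embed_test.sh may differ.
name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(name).eval()
processor = CLIPProcessor.from_pretrained(name)

query = "a dog catching a frisbee on the beach"
image = Image.open("data/corpus/example.jpg")  # hypothetical path

with torch.no_grad():
    # Embed the text query and the image document into the same space.
    text_inputs = processor(text=[query], return_tensors="pt", padding=True)
    image_inputs = processor(images=image, return_tensors="pt")
    query_emb = model.get_text_features(**text_inputs)
    doc_emb = model.get_image_features(**image_inputs)

print(query_emb.shape, doc_emb.shape)  # each is (1, embedding_dim)
```

Vectors produced this way are what the FAISS index shown under Features would be built from.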
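For step 6, the inference scripts handle prompt construction and multi-document contexts. As a hedged illustration only, here is a minimal single-image zero-shot call to Qwen2-VL through Hugging Face transformers; the retrieved file path and the question are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

retrieved_image = Image.open("retrieved/doc_0.jpg")  # hypothetical retrieved document
conversation = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Using the retrieved image as context, "
                                 "answer: what sport is being played?"},
    ]},
]

# Build the chat prompt, then pass text and image through the processor together.
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[retrieved_image],
                   return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```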
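For step 7, the `compute_ppl_*` scripts score candidates by perplexity; a common way to re-rank is to order candidate images by how low a perplexity the MLLM assigns to the query text conditioned on each image. The exact computation is model-specific, so the sketch below shows only the perplexity calculation itself, with a small text-only model as a stand-in.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Text-only stand-in: in M2RAG the likelihood would be computed by an MLLM
# (MiniCPM-V 2.6 or Qwen2-VL) conditioned on each candidate image.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """exp of the mean token-level negative log-likelihood of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

# Hypothetical candidates for one query: lower perplexity ranks higher.
candidates = ["a dog catching a frisbee on the beach",
              "a spreadsheet with quarterly sales figures"]
ranked = sorted(candidates, key=perplexity)
print(ranked)
```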
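For step 8, `src/evaluation` contains the benchmark's own metric scripts. As an illustration of the kind of text-generation metric such scripts compute, here is corpus-level BLEU via sacrebleu; the metrics M2RAG actually reports may differ (e.g., ROUGE, CIDEr, or task accuracy).

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical model outputs and gold references; M2RAG's own metrics may differ.
hypotheses = ["a dog catches a frisbee on the beach"]
references = [["a dog is catching a frisbee at the beach"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```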