

M2RAG
Overview:
M2RAG is a benchmark codebase for retrieval-augmented generation in multimodal contexts. Given a query, it retrieves relevant multimodal documents and feeds them to a multimodal large language model (MLLM), evaluating how well the model leverages knowledge from multimodal contexts. Models are assessed on four tasks: image captioning, multimodal question answering, multimodal fact verification, and image re-ranking, with the goal of improving the effectiveness of models in multimodal in-context learning.
Target Users:
M2RAG is suitable for scholars and developers engaged in multimodal language model research, especially those who want to strengthen the retrieval and generation capabilities of models in multimodal contexts. It gives researchers a standardized testing platform for evaluating and improving the performance of multimodal large language models.
Use Cases
Researchers can use M2RAG to evaluate the performance of multimodal large language models in image captioning tasks.
Developers can use the code and datasets provided by M2RAG to quickly reproduce the experimental results of multimodal retrieval-augmented generation.
Companies can build on M2RAG's multimodal question answering pipeline to prototype intelligent customer service systems and improve user experience.
Features
Supports multimodal tasks, including image captioning, multimodal question answering, fact verification, and image re-ranking
Provides a multimodal retrieval-augmented instruction tuning (MM-RAIT) method to improve the performance of models in multimodal contextual learning
Compatible with various pre-trained models, such as MiniCPM-V 2.6 and Qwen2-VL
Provides a complete dataset and code implementation, making it easy for researchers to reproduce and extend experiments
Supports both zero-shot and fine-tuning settings to suit different research needs
Provides detailed evaluation metrics for measuring the performance of generation tasks
Supports multimodal document retrieval, using FAISS for efficient similarity search (see the sketch after this list)
Provides ready-to-run fine-tuning scripts for the supported pre-trained models
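
The sketch below illustrates what dense retrieval over a multimodal corpus with FAISS can look like. The embedding dimension, the random vectors standing in for encoder outputs, and the exact-search index type are illustrative assumptions, not the repository's actual code (which sits behind `script/get_embed_test.sh` and `script/retrieval_test.sh`):

```python
import faiss
import numpy as np

# Illustrative stand-ins for encoder outputs: in practice these would come
# from a shared multimodal embedding model applied to queries and documents.
dim = 768
doc_embeds = np.random.rand(10_000, dim).astype("float32")
query_embeds = np.random.rand(32, dim).astype("float32")

# L2-normalize so that inner product equals cosine similarity.
faiss.normalize_L2(doc_embeds)
faiss.normalize_L2(query_embeds)

index = faiss.IndexFlatIP(dim)  # exact inner-product search
index.add(doc_embeds)

# Retrieve the top-5 most relevant documents for each query.
scores, doc_ids = index.search(query_embeds, 5)
```

For larger corpora, an approximate index (e.g., `IndexIVFFlat` or HNSW) would trade a little recall for much faster search; the flat index here keeps the example exact and simple.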
How to Use
1. Clone the repository: `git clone https://github.com/NEUIR/M2RAG`
2. Install dependencies: install the Python packages listed in `requirements.txt` (e.g., `pip install -r requirements.txt`)
3. Prepare the dataset: Download the M2RAG dataset or build your own according to the instructions, and place it in the `data` folder
4. Encode the test set queries and multimodal corpus: Run `script/get_embed_test.sh`
5. Retrieve the most relevant multimodal documents: Run `script/retrieval_test.sh`
6. Perform zero-shot inference using the retrieved documents: Run `script/inference_cpmv.sh` or `script/inference_qwen.sh` (see the inference sketch below)
7. For the image re-ranking task, run `script/compute_ppl_minicpmv.sh` or `script/compute_ppl_qwen2vl.sh` to score candidate images by perplexity (see the perplexity sketch below)
8. Use the scripts in `src/evaluation` to evaluate performance on the generation tasks
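
For step 6, the sketch below shows what zero-shot inference with Qwen2-VL looks like through Hugging Face `transformers`, conditioning the model on one retrieved image plus the question. It follows the model's public quickstart rather than the repository's `inference_qwen.sh`; the file path and prompt text are placeholders:

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # helper shipped alongside Qwen2-VL

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# A retrieved multimodal document (placeholder path) plus the user question.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "retrieved_doc.jpg"},
        {"type": "text", "text": "Using the retrieved image, answer: ..."},
    ],
}]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding the answer.
answer = processor.batch_decode(
    generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```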
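For step 7, image re-ranking scores each candidate image by the perplexity the MLLM assigns to the query text with that image in context; lower perplexity indicates a better match. The helper below is a generic, model-agnostic sketch of that perplexity computation over the target span of a tokenized sequence, not the repository's `compute_ppl_*` scripts:

```python
import math
import torch

@torch.no_grad()
def span_perplexity(model, input_ids, target_start):
    """Perplexity of input_ids[target_start:] conditioned on the prefix.

    `model` is any causal LM returning `.logits`; `input_ids` has shape
    (1, seq_len). The prefix would hold the image/instruction tokens and
    the target span the query text being scored.
    """
    logits = model(input_ids).logits[:, :-1, :]      # position t predicts token t+1
    targets = input_ids[:, 1:]
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    target_logp = token_logp[:, target_start - 1:]   # keep only the target span
    return math.exp(-target_logp.mean().item())

# Re-ranking loop (conceptual): build one sequence per candidate image,
# compute span_perplexity over the query tokens, and sort ascending.
```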