OneGen
Overview:
OneGen is an efficient single-pass generation and retrieval framework for large language models (LLMs), designed for fine-tuning on generation, retrieval, or hybrid tasks. The core idea is to integrate generation and retrieval within the same context by allocating the retrieval task to retrieval tokens that are generated autoregressively, so the LLM handles both tasks in a single forward pass. This not only lowers deployment costs but also significantly reduces inference cost, since queries no longer require a second forward pass.
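The gist of the single-pass design can be sketched with a HuggingFace-style causal LM, as below. The retrieval-token name `[RQ]`, the base model, and the exact choice of embedding are illustrative assumptions, not OneGen's actual implementation:

```python
# Minimal sketch, assuming a causal LM whose vocabulary is extended with a
# hypothetical special retrieval token "[RQ]". One forward pass yields both
# (a) next-token logits for generation and (b) a contextual hidden state at
# the retrieval token that can serve as a query embedding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({"additional_special_tokens": ["[RQ]"]})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.resize_token_embeddings(len(tokenizer))  # make room for [RQ]

inputs = tokenizer("Who directed Inception? [RQ]", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# (a) generation signal: logits for the next token
next_token_logits = out.logits[0, -1]

# (b) retrieval signal: the hidden state at the [RQ] position becomes the
# query embedding, with no separate encoder pass over the query
rq_id = tokenizer.convert_tokens_to_ids("[RQ]")
rq_pos = (inputs["input_ids"][0] == rq_id).nonzero(as_tuple=True)[0][-1]
query_embedding = out.hidden_states[-1][0, rq_pos]
```

OneGen trains both signals jointly, pairing the usual language-modeling loss on generated tokens with a contrastive objective on the retrieval tokens.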
Target Users:
OneGen targets researchers and developers in natural language processing, particularly those working on generation and retrieval tasks for large language models. It lets them train and run models more efficiently while reducing resource consumption.
Use Cases
Used for entity linking, quickly identifying entities in text with a pre-trained model.
In single-hop question answering, generates accurate answers directly with the model.
Applied to multi-hop question answering, finding answers through the model's interleaved reasoning and retrieval.
Features
Unified handling of generation and retrieval tasks, lowering deployment costs.
Incorporates retrieval into the generation process, avoiding a second forward pass over the query (see the sketch after this list).
Supports various tasks including entity linking, single-hop question answering, and multi-hop question answering.
Provides easy access to pre-trained model downloads for quicker user onboarding.
Supports training models from scratch with flexible configuration options.
Includes detailed evaluation scripts to facilitate user assessment of model performance.
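Because the query embedding falls out of the generation pass itself, retrieval reduces to a similarity lookup against a pre-built document index. A minimal sketch, with random tensors standing in for real embeddings:

```python
import torch
import torch.nn.functional as F

hidden = 768                                # embedding width (stand-in value)
doc_embeddings = torch.randn(1000, hidden)  # pre-computed corpus index (stand-in)
query_embedding = torch.randn(hidden)       # e.g. the [RQ] state from the earlier sketch

# Rank documents by cosine similarity to the query embedding; no second
# encoder pass over the query is needed.
scores = F.cosine_similarity(query_embedding.unsqueeze(0), doc_embeddings, dim=-1)
top_docs = scores.topk(k=5).indices  # ids of the most similar documents
```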
How to Use
1. Clone the OneGen repository to your local environment.
2. Create and activate a Python virtual environment.
3. Install the required dependencies.
4. Download and extract the dataset in preparation for training or inference.
5. Optionally download a pre-trained model as needed.
6. Configure the model parameters and paths.
7. Run the inference script to make model predictions.
8. Use the evaluation script to assess model performance (a workflow sketch follows this list).
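A minimal sketch of that workflow, assuming a Unix-like environment; the repository URL and the script names (`infer.py`, `eval.py`) are assumptions to be checked against the OneGen README:

```python
# Sketch of steps 1-8 above. Repository URL, script names, and paths are
# assumptions; consult the OneGen README for the real entry points.
import subprocess

def run(cmd: list[str]) -> None:
    """Echo and run a command, raising on failure."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["git", "clone", "https://github.com/zjunlp/OneGen.git"])       # 1. clone (URL assumed)
run(["python", "-m", "venv", ".venv"])                               # 2. create a virtual env
run([".venv/bin/pip", "install", "-r", "OneGen/requirements.txt"])   # 3. install dependencies
# 4-6: download/extract the dataset, optionally fetch a pre-trained
#      checkpoint, and edit the config; the paths below are hypothetical.
run([".venv/bin/python", "OneGen/infer.py", "--config", "config/eval.json"])  # 7. inference
run([".venv/bin/python", "OneGen/eval.py", "--pred", "output/pred.json"])     # 8. evaluation
```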