RoboticsDiffusionTransformer
Overview:
RDT-1B is a state-of-the-art imitation-learning diffusion transformer with 1 billion parameters, currently the largest of its kind. It was pre-trained on over 1 million multi-robot episodes. Given a language instruction and RGB images from up to three camera views, RDT predicts the next 64 robot actions. RDT is compatible with almost all modern mobile manipulators: single-arm to dual-arm systems, joint-space to end-effector control, position to velocity commands, and even wheeled locomotion. Fine-tuned on over 6,000 self-collected bimanual episodes and deployed on the ALOHA dual-arm robot, the model achieves leading performance in dexterity, zero-shot generalization, and few-shot learning.
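The input/output contract above can be summarized in a short sketch. This is illustrative code under assumed names (`predict_chunk` and `model.sample` are hypothetical, not the repository's actual API):

```python
import numpy as np

HORIZON = 64     # RDT predicts a chunk of the next 64 actions per call
MAX_VIEWS = 3    # e.g., an exterior camera plus two wrist cameras

def predict_chunk(model, instruction: str, views: list) -> np.ndarray:
    """Return a (HORIZON, action_dim) array of future robot actions.

    `model.sample` is a hypothetical stand-in for however the real model
    encodes the instruction and images and denoises an action chunk.
    """
    assert 1 <= len(views) <= MAX_VIEWS, "RDT consumes RGB images from up to three views"
    return model.sample(instruction, views, horizon=HORIZON)
```

In deployment, a controller would typically execute some prefix of each 64-step chunk and then re-query the model; the exact replanning scheme is up to the deployment code.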
Target Users:
The target audience is researchers and developers in robotics, artificial intelligence, and machine learning. RDT-1B's strengths in multi-robot learning, imitation learning, and bimanual manipulation make it especially suitable for teams that need robots to perform precise manipulation and to learn tasks from language instructions in complex environments.
Use Cases
Researchers fine-tune RDT-1B on custom datasets to adapt it to specific manipulation tasks.
Developers deploy the fine-tuned model on real robot platforms to automate manipulation.
Educators use the model to demonstrate to students how complex robotic manipulation can be learned with deep learning.
Features
Model Implementation: Provides the code implementing the RDT model.
Pre-trained Model Weights: Offers RDT-1B weights pre-trained on over 1 million multi-robot episodes.
Training and Sampling Scripts: Includes training and sampling scripts with DeepSpeed support (a minimal training-loop sketch follows this list).
Real Robot Deployment Example: Provides sample code for deploying the model on physical robots.
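Because the repository advertises DeepSpeed-backed training scripts, the following minimal, generic DeepSpeed training-loop sketch shows the engine pattern involved. Everything model- and data-specific here (`ToyPolicy`, `ToyEpisodes`, the config values) is a stand-in, not the repository's actual code:

```python
import deepspeed
import torch
import torch.nn as nn
from torch.utils.data import Dataset

class ToyPolicy(nn.Module):
    """Stand-in for the RDT diffusion transformer (hypothetical)."""
    def __init__(self, obs_dim=128, act_dim=14, horizon=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 512), nn.ReLU(),
                                 nn.Linear(512, act_dim * horizon))
        self.horizon, self.act_dim = horizon, act_dim

    def forward(self, obs, actions):
        pred = self.net(obs).view(-1, self.horizon, self.act_dim)
        # Placeholder loss; the real objective is a diffusion denoising loss.
        return nn.functional.mse_loss(pred, actions)

class ToyEpisodes(Dataset):
    """Stand-in for a robot-episode dataset (hypothetical)."""
    def __len__(self):
        return 256
    def __getitem__(self, i):
        return torch.randn(128), torch.randn(64, 14)

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

model = ToyPolicy()
engine, _, loader, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(),
    training_data=ToyEpisodes(), config=ds_config,
)

for obs, actions in loader:
    loss = engine(obs.to(engine.device), actions.to(engine.device))
    engine.backward(loss)   # DeepSpeed handles gradient accumulation/scaling
    engine.step()
```

The repository's real scripts additionally handle the multimodal encoders, the diffusion objective, and distributed launch; this only illustrates the initialize/forward/backward/step cycle that DeepSpeed support implies.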
How to Use
1. Clone the repository and install the prerequisites.
2. Download and link the multimodal encoder.
3. Modify the configuration file as needed.
4. Prepare the dataset and implement a dataset loader (see the sketch after this list).
5. Compute the dataset statistics (e.g., for action normalization).
6. Start fine-tuning the model.
7. After fine-tuning is complete, deploy the model on physical robots.
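Steps 4 and 5 are where most of the adaptation work happens. Below is a hedged sketch of what a loader and statistics pass might look like; the file layout, array keys, and class names are assumptions for illustration, not the repository's actual interface:

```python
import glob
import numpy as np

class EpisodeDataset:
    """Hypothetical loader over episodes stored as .npz files."""
    def __init__(self, paths):
        self.paths = list(paths)

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        ep = np.load(self.paths[i])
        return {"images": ep["images"],    # (T, n_views, H, W, 3) uint8
                "actions": ep["actions"]}  # (T, action_dim) float32

def compute_action_stats(dataset):
    """Per-dimension min/max/mean/std over all actions in the dataset."""
    actions = np.concatenate([dataset[i]["actions"] for i in range(len(dataset))])
    return {"min": actions.min(0), "max": actions.max(0),
            "mean": actions.mean(0), "std": actions.std(0)}

dataset = EpisodeDataset(sorted(glob.glob("data/episodes/*.npz")))  # hypothetical path
stats = compute_action_stats(dataset)
```

Statistics like these are typically saved alongside the dataset so that fine-tuning can normalize actions consistently across episodes.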