Ruyi-Mini-7B
Overview:
Ruyi-Mini-7B is an open-source image-to-video generation model developed by the CreateAI team, with approximately 7.1 billion parameters. It generates videos at resolutions from 360p to 720p, in various aspect ratios, for durations of up to 5 seconds, and provides motion and camera controls for added flexibility. The model is released under the Apache 2.0 license, so users may freely use and modify it.
Target Users:
Ruyi-Mini-7B targets video creators, animators, game developers, and researchers. It offers a straightforward way to generate dynamic video content from static images, usable for animations, game backgrounds, advertisements, and other multimedia content.
Total Visits: 29.7M
Top Region: US (17.94%)
Website Views: 70.4K
Use Cases
- Video creators use Ruyi-Mini-7B to generate animated backgrounds from static images.
- Game developers leverage the model to create dynamic backgrounds for game characters.
- Advertisers utilize the model to generate engaging video content for advertisements.
Features
- Video compression and decompression: a Causal VAE module reduces spatial resolution to 1/8 and temporal resolution to 1/4 of the input (see the latent-shape sketch after this list).
- 3D full-attention video generation: the Diffusion Transformer module attends jointly over space and time, using 2D Normalized-RoPE for the spatial dimensions and Sin-Cos positional embeddings for the temporal dimension, and is trained with DDPM (a minimal Sin-Cos sketch appears after this list).
- Semantic feature extraction: uses a CLIP model to extract semantic features from the input image, which guide the entire video generation process (an illustrative extraction snippet appears after this list).
- Multi-resolution support: Capable of handling video generation at resolutions from 360p to 720p.
- Motion and camera controls: adjustable motion and camera-control parameters give finer creative direction over the generated video.
- Open-source license: Released under Apache 2.0, allowing users to freely use and modify the model.
- Efficient video generation: The model can rapidly generate video content of up to 5 seconds in length.
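To make the compression factors above concrete, here is a minimal latent-shape sketch; the helper name and its floor-division rounding are assumptions for illustration, not the model's actual API:

```
# Illustrative sketch of the Causal VAE compression arithmetic described
# above (spatial 1/8, temporal 1/4). The function name and floor-division
# rounding are assumptions for illustration, not the model's actual API.
def latent_shape(frames: int, height: int, width: int) -> tuple[int, int, int]:
    """Approximate latent grid size for a (frames, height, width) clip."""
    return (frames // 4, height // 8, width // 8)

# A 5-second, 24 fps, 720p clip (120 frames of 720x1280 pixels) compresses
# to roughly a 30 x 90 x 160 latent grid before the Diffusion Transformer.
print(latent_shape(120, 720, 1280))  # -> (30, 90, 160)
```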
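The temporal Sin-Cos embedding mentioned above follows the classic Transformer recipe; this sketch shows the general form only, not Ruyi's exact implementation:

```
import math

# Classic Transformer-style Sin-Cos positional embedding, shown only to
# illustrate the kind of temporal position signal the feature list refers
# to; Ruyi's exact frequencies and layout are not specified here.
def sincos_embedding(position: int, dim: int) -> list[float]:
    """Return a dim-length embedding for one temporal position (dim even)."""
    emb = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))
        emb.append(math.sin(position * freq))  # even index: sine
        emb.append(math.cos(position * freq))  # odd index: cosine
    return emb

# Each frame index gets a distinct, smoothly varying vector.
print(sincos_embedding(position=3, dim=8))
```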
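As a hedged illustration of CLIP-based image-feature extraction, the snippet below uses the Hugging Face transformers library with an assumed checkpoint name and file path; Ruyi's internal CLIP integration may differ from this standalone sketch:

```
# Standalone illustration of CLIP image-feature extraction; the checkpoint
# name and image path are assumptions, and Ruyi's internal CLIP integration
# may differ from this sketch.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("input.jpg")  # the static image that seeds the video
inputs = processor(images=image, return_tensors="pt")
features = model.get_image_features(**inputs)
print(features.shape)  # a single semantic embedding vector for the image
```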
How to Use
1. Clone the Ruyi-Models repository from GitHub.
2. Navigate to the Ruyi-Models directory.
3. Install the dependencies listed in requirements.txt using pip.
4. Run the model by executing python3 predict_i2v.py.
5. Alternatively, use the ComfyUI wrapper from the GitHub repository to run the model.
6. Input an image and wait for the model to generate the video.
7. Adjust motion and camera control parameters as needed to optimize the video effect.
8. Export the generated video content.
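Assuming the repository lives under the project's official GitHub organization (verify the URL on the CreateAI page if it differs), steps 1 through 4 condense to the following commands:

```
git clone https://github.com/IamCreateAI/Ruyi-Models.git
cd Ruyi-Models
pip install -r requirements.txt
python3 predict_i2v.py
```

The ComfyUI wrapper mentioned in step 5 is an alternative front end for running the same model.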