L4GM
Overview
L4GM is a 4D large-scale reconstruction model that quickly generates animated 3D objects from single-view video input. It is trained on a novel dataset of multi-view videos of animated Objaverse objects: 44K distinct objects with 110K animations, each rendered from 48 viewpoints, yielding 12M videos and 300M frames in total. L4GM builds on LGM, a pre-trained 3D large reconstruction model that outputs 3D Gaussians from multi-view image input. L4GM produces a 3D Gaussian Splatting representation for each frame, which is then upsampled to a higher frame rate for temporal smoothing. It also incorporates time-self-attention layers to help learn temporal consistency and is trained with a multi-view rendering loss at each timestep.
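The time-self-attention idea described above can be illustrated with a minimal sketch: per-frame feature tokens attend to each other along the time axis, so information flows between frames. The single-head formulation, identity projections, and shapes below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def time_self_attention(features):
    """Minimal single-head attention applied along the time axis.

    features: array of shape (T, D) -- one feature vector per frame.
    Tokens from different frames exchange information, which is the
    mechanism L4GM uses to encourage temporal consistency. Projection
    matrices are omitted (identity) for brevity; this is a sketch,
    not the model's actual layer.
    """
    T, D = features.shape
    scores = features @ features.T / np.sqrt(D)            # (T, T) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # softmax over time
    return weights @ features                              # (T, D) mixed features
```

In the full model such layers sit inside a U-Net backbone, so each frame's Gaussian prediction is conditioned on its temporal neighbours rather than computed independently.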
Target Users
L4GM suits professionals and researchers who need to quickly generate high-quality animated 3D objects, for example in film production, game development, and virtual reality. It can significantly speed up animation production, reduce costs, and give creators greater creative freedom.
Use Cases
Rapid Generation of Animated Characters in Film Production
Creation of Dynamic Environments and Characters in Game Development
Construction of Interactive 3D Scenes in Virtual Reality
Features
Generate 4D Objects from Videos
Support Reconstruction of Long Videos and High Frame Rate Videos
Increase Frame Rate through 4D Interpolation Models
Employ U-Net Architecture and Self-Attention Mechanisms
Support Automatic Reconstruction and Temporal Consistency
Leverage Multi-View Rendering Loss for Model Training
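The last feature, a multi-view rendering loss applied at every timestep, can be sketched as a per-frame image comparison averaged over views. The plain L2 term and the tensor layout below are illustrative assumptions standing in for whatever combination of image losses the model actually uses.

```python
import numpy as np

def multiview_rendering_loss(rendered, target):
    """Sketch of a multi-view rendering loss summed over timesteps.

    rendered, target: arrays of shape (T, V, H, W, 3) -- renderings of
    the predicted per-frame Gaussians from V camera views over T
    timesteps, compared against ground-truth views. A simple mean
    squared error stands in here for the actual image loss.
    """
    assert rendered.shape == target.shape
    # Per-timestep error, averaged over views and pixels.
    per_frame = ((rendered - target) ** 2).mean(axis=(1, 2, 3, 4))  # (T,)
    return per_frame.mean()
```

Supervising every timestep from many views is what lets the model learn consistent 3D geometry without any 4D ground truth beyond the rendered videos.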
How to Use
1. Prepare a single-view video input.
2. Perform 4D reconstruction using the L4GM model.
3. Observe the 3D Gaussian Splatting representation output by the model.
4. Enhance the video frame rate using the interpolation model.
5. Ensure temporal consistency through the self-attention mechanism.
6. Optimize model training using multi-view rendering loss.
7. Apply the generated animated objects to the desired scenes or projects.
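The frame-rate enhancement in step 4 can be sketched as interpolating per-frame Gaussian parameters between reconstructed keyframes. L4GM uses a learned 4D interpolation model for this; plain linear interpolation of Gaussian centers, and all shapes below, are illustrative stand-ins.

```python
import numpy as np

def upsample_gaussians(positions, factor=2):
    """Sketch of temporal upsampling of per-frame Gaussian centers.

    positions: (T, N, 3) -- centers of N Gaussians over T frames.
    Inserts (factor - 1) linearly interpolated frames between each
    pair of neighbouring frames, standing in for the learned 4D
    interpolation model that raises the frame rate.
    """
    T, N, _ = positions.shape
    out = []
    for t in range(T - 1):
        for k in range(factor):
            a = k / factor
            out.append((1 - a) * positions[t] + a * positions[t + 1])
    out.append(positions[-1])
    return np.stack(out)  # ((T - 1) * factor + 1, N, 3)
```

A learned interpolator can also blend scales, rotations, and opacities, which simple linear blending of centers does not capture.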