

SC-GS
Overview:
SC-GS is a novel representation that uses sparse control points for motion and dense 3D Gaussians for appearance when modeling dynamic scenes. It learns a compact basis of 6-degree-of-freedom (6DOF) transformations at a small number of control points, and the motion field of the 3D Gaussians is obtained by locally interpolating these transformations with learned blending weights. A deformation MLP predicts the time-varying 6DOF transformation of each control point, which reduces the complexity of learning while retaining enough capacity to capture coherent spatiotemporal motion. The 3D Gaussians, the canonical-space positions of the control points, and the deformation MLP are learned jointly to reconstruct the scene's appearance, geometry, and dynamics. During training, the positions and number of control points are adaptively adjusted to match the motion complexity of different regions, and a regularization loss enforces spatial continuity and local rigidity of the motion. Because the motion representation is explicitly sparse and decoupled from appearance, the method enables user-controlled motion editing while preserving high-fidelity appearance. Extensive experiments show that it outperforms existing approaches in novel view synthesis while rendering at high speed, and that it supports the novel application of user-controlled motion editing with preserved appearance.
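At the heart of the motion representation is a linear-blend-skinning-style interpolation: each Gaussian's displacement is a weighted blend of the 6DOF transforms of its nearest control points. The following is a minimal PyTorch sketch of that interpolation step, not the authors' actual implementation; the function and parameter names (interpolate_motion, radii, K, the (w, x, y, z) quaternion convention) are illustrative assumptions.

```python
# Minimal sketch of LBS-style motion interpolation from sparse control points.
# Names and tensor layouts are assumptions, not the SC-GS codebase's API.
import torch
import torch.nn.functional as F

def interpolate_motion(gauss_xyz, control_pts, ctrl_rot, ctrl_trans, radii, K=4):
    """Blend per-control-point 6DOF transforms into a per-Gaussian motion field.

    gauss_xyz:   (N, 3) canonical Gaussian centers
    control_pts: (M, 3) canonical control point positions
    ctrl_rot:    (M, 4) unit quaternions predicted by a deformation MLP at time t
    ctrl_trans:  (M, 3) translations predicted by the same MLP
    radii:       (M,)   learned RBF radii controlling each point's influence
    Returns displaced Gaussian centers of shape (N, 3).
    """
    # K nearest control points for each Gaussian (squared distances).
    d2 = torch.cdist(gauss_xyz, control_pts) ** 2              # (N, M)
    d2_k, idx = d2.topk(K, dim=1, largest=False)               # (N, K)

    # Gaussian RBF weights, normalized so each row sums to 1.
    w = torch.exp(-d2_k / (2 * radii[idx] ** 2))                # (N, K)
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)

    # Rotate the offset from each control point, add its translation, and blend.
    offset = gauss_xyz.unsqueeze(1) - control_pts[idx]          # (N, K, 3)
    R = quaternion_to_matrix(ctrl_rot[idx])                     # (N, K, 3, 3)
    warped = torch.einsum('nkij,nkj->nki', R, offset) \
             + control_pts[idx] + ctrl_trans[idx]               # (N, K, 3)
    return (w.unsqueeze(-1) * warped).sum(dim=1)                # (N, 3)

def quaternion_to_matrix(q):
    """Convert unit quaternions (..., 4) in (w, x, y, z) order to rotation matrices."""
    q = F.normalize(q, dim=-1)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),
        2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),
        2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y),
    ], dim=-1).reshape(*q.shape[:-1], 3, 3)
```

Because the weights depend only on distances to a few control points, moving or re-transforming a control point changes nearby Gaussians smoothly, which is what makes the sparse representation editable.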
Target Users:
Novel View Synthesis, High-Fidelity Animation Generation, Special Effects Production, Motion Completion, Virtual Reality, etc.
Use Cases
Dynamic scene rendering in film special effects production
Real-world scene modeling and interaction in virtual reality/augmented reality applications
Modifying 3D animation motion sequences by editing control points
Features
Decomposes dynamic scenes into sparse control points (motion representation) and dense 3D Gaussians (appearance representation)
Utilizes a deformation MLP to predict the time-varying 6DOF transformation of each control point
Derives the motion field of the 3D Gaussians by interpolating the control point transformations
Jointly learns the 3D Gaussians, the control point positions, and the deformation MLP
Adaptively adjusts the positions and number of control points during training
Applies a regularization loss to enforce motion continuity and local rigidity (a sketch of one such term follows this list)
Supports interactive user editing of motion by manipulating the control points
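For the local-rigidity constraint mentioned above, one common formulation is an as-rigid-as-possible (ARAP) style penalty between neighboring control points across time. Below is a hedged PyTorch sketch of such a term; the neighbor scheme, tensor shapes, and names (rigidity_loss, neighbors, rot_t1) are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of an ARAP-style local-rigidity regularizer on control points.
# The exact loss used by SC-GS may differ; this only illustrates the idea.
import torch

def rigidity_loss(pts_t0, pts_t1, rot_t1, neighbors):
    """Penalize non-rigid deformation of each control point's local neighborhood.

    pts_t0:    (M, 3) control point positions in the canonical space
    pts_t1:    (M, 3) positions after applying the predicted transforms at time t
    rot_t1:    (M, 3, 3) predicted rotation of each control point at time t
    neighbors: (M, K) indices of each point's K nearest control points
    """
    # Edge vectors to neighbors before and after deformation.
    e0 = pts_t0[neighbors] - pts_t0.unsqueeze(1)      # (M, K, 3)
    e1 = pts_t1[neighbors] - pts_t1.unsqueeze(1)      # (M, K, 3)

    # If the motion were locally rigid, rotating the canonical edge by the
    # point's predicted rotation would reproduce the deformed edge.
    e0_rot = torch.einsum('mij,mkj->mki', rot_t1, e0)
    return ((e1 - e0_rot) ** 2).sum(dim=-1).mean()
```

Such a term discourages neighboring control points from drifting apart or shearing, which keeps the interpolated Gaussian motion spatially coherent and makes downstream motion editing behave predictably.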