

PhysAvatar
Overview
PhysAvatar is a framework that combines inverse rendering and inverse physics to automatically estimate the shape, appearance, and physical parameters of clothing from multi-view video data. It tracks a spacetime mesh using mesh-aligned 4D Gaussians and employs a physics-based inverse renderer to estimate intrinsic material properties. PhysAvatar couples a physics simulator with gradient-based optimization to estimate the clothing's physical parameters in a principled manner. Together, these capabilities let PhysAvatar render high-quality avatars of people wearing loose clothing from novel viewpoints, under motions and illuminations unseen during training.
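The inverse-physics idea in the overview can be sketched on a toy problem: run a simple simulator, compare its output to "tracked" motion, and update a physical parameter by gradient descent until they match. This is an illustrative sketch only, not PhysAvatar's actual simulator or optimizer; the mass-spring model, the finite-difference gradient, and all names and values here are hypothetical stand-ins (a real system would use a cloth simulator and a differentiable or adjoint gradient).

```python
import numpy as np

def simulate(stiffness, steps=50, dt=0.01):
    """Toy semi-implicit Euler simulation of a unit mass on a spring."""
    x, v = 1.0, 0.0                 # initial displacement and velocity
    traj = []
    for _ in range(steps):
        a = -stiffness * x          # Hooke's law acceleration (unit mass)
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# "Observed" trajectory, standing in for tracked mesh motion (synthesized
# here with a known ground-truth stiffness so convergence can be checked).
true_stiffness = 4.0
observed = simulate(true_stiffness)

def loss(stiffness):
    """Mismatch between simulated and observed motion."""
    return np.mean((simulate(stiffness) - observed) ** 2)

# Gradient-based optimization with a finite-difference gradient.
k, lr, eps = 1.0, 50.0, 1e-4
for _ in range(300):
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * grad
# k is now close to the ground-truth stiffness of 4.0
```

The point of the sketch is the loop structure: physical parameters are free variables, the simulator sits inside the loss, and gradients drive the estimate toward values that reproduce the observed motion.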
Target Users
A framework for modeling and rendering 3D avatars of people wearing loose clothing.
Use Cases
High-fidelity rendering of 3D human models for the film industry
Building realistic clothing animations in games and virtual reality
Showcasing the realistic wear effect of clothing on e-commerce platforms
Features
Multi-view video data input
Dynamic mesh tracking
Clothing physical parameter estimation
Physics-based differentiable rendering
Rendering under novel motions and lighting conditions
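The last two features hinge on a renderer whose output is differentiable in the appearance parameters, so appearance can be fit from images and then reused under new lighting. A minimal sketch of that idea, assuming a simple per-vertex Lambertian model (this is not PhysAvatar's renderer; the function and values are illustrative):

```python
import numpy as np

def lambertian_shade(normals, albedo, light_dir):
    """Diffuse shading: per-vertex color = albedo * max(n . l, 0).
    The output is differentiable w.r.t. albedo, so appearance could be
    fit by gradient descent against observed images, then re-rendered
    under a different light_dir."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)                 # normalize light direction
    ndotl = np.clip(normals @ l, 0.0, None)   # clamp back-facing to zero
    return albedo * ndotl[:, None]            # broadcast albedo per vertex

# Two vertices: one facing the light, one perpendicular to it.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
albedo = np.array([[0.8, 0.2, 0.2]])          # shared RGB albedo
colors = lambertian_shade(normals, albedo, light_dir=[0.0, 0.0, 1.0])
# colors[0] is the full albedo; colors[1] is black (unlit)
```

Swapping in a different `light_dir` relights the same estimated appearance, which is what "rendering under novel lighting conditions" amounts to at this toy scale.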