StructLDM
Overview:
StructLDM is a structured latent diffusion model designed to learn 3D human generation from 2D images. It can generate diverse, viewpoint-consistent human figures and supports various levels of controllable generation and editing, such as combined generation and local clothing editing. The model enables garment-independent generation and editing without requiring clothing types or mask conditions. This project was proposed by Tao Hu, Fangzhou Hong, and Ziwei Liu from the S-Lab of Nanyang Technological University, with related research published at ECCV 2024.
Target Users:
The target audience includes researchers, developers, and 3D content creators who can utilize the StructLDM model for generating and editing 3D human figures for academic research, game development, virtual reality, and other fields.
Total Visits: 474.6M
Top Region: US (19.34%)
Website Views: 47.2K
Use Cases
Researchers use StructLDM to study the generation of human poses and expressions
Game developers employ this model to create virtual characters
3D human model generation and interaction in virtual reality applications
Features
Learning 3D human generation from 2D images
Generating diverse, viewpoint-consistent human figures
Supporting combined generation with mixed parts
Enabling local clothing editing and 3D virtual fitting
Generation and editing without clothing type or mask conditions
Providing the option to download pre-trained models and sample data
Supporting training and testing with custom datasets
How to Use
1. Install the required dependencies and set up the environment; Anaconda is recommended for managing the Python environment.
2. Download the pre-trained model, sample data, and necessary assets, and place them in the specified directory.
3. Register for and download the SMPL model, then place it in the smpl_data folder.
4. Run the generation script, e.g., bash scripts/renderpeople.sh gpu_ids; the results are written to DATA_DIR/result/test_output.
5. Prepare your own dataset by following the layout of sample_data, and update the corresponding paths in the configuration file.
6. Use the training script to train the model; the trained model will be stored in DATA_DIR/result/trained_model/modelname/diffusion_xx.pt.
7. Run the inference script for model testing; samples will be stored in DATA_DIR/result/trained_model/modelname/samples.
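The steps above can be sketched as a single shell session. The script name scripts/renderpeople.sh and the DATA_DIR output paths come from the steps; the environment name, Python version, requirements.txt file, and the GPU id 0 are assumptions and may differ in the actual repository.

```shell
# Hedged sketch of the StructLDM workflow; adjust names to the real repo.

# 1. Environment setup (Anaconda recommended). Env name and Python
#    version are assumptions, not taken from the project docs.
conda create -n structldm python=3.9 -y
conda activate structldm
pip install -r requirements.txt   # assumed dependency file

# 2-3. Place the downloaded pre-trained model, sample data, and assets
#    in the specified directories; after registering for SMPL, copy the
#    model files into smpl_data/.

# 4. Generation. The argument is a comma-separated list of GPU ids;
#    "0" (a single GPU) is an illustrative choice.
bash scripts/renderpeople.sh 0
# results appear under DATA_DIR/result/test_output

# 6. Training with a custom dataset (after editing the config paths);
#    checkpoints land in DATA_DIR/result/trained_model/modelname/

# 7. Inference/testing; samples are written to
#    DATA_DIR/result/trained_model/modelname/samples
```

Because each step depends on downloaded assets and the repository's own scripts, treat this as a checklist rather than a copy-paste install.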