Sketch2NeRF
Overview:
Sketch2NeRF is a text-to-3D generation framework guided by multi-view sketches. It leverages pre-trained 2D diffusion models (Stable Diffusion and ControlNet) to supervise the optimization of a 3D scene represented as a neural radiance field (NeRF), and it introduces a novel synchronized generation and reconstruction method to optimize the NeRF effectively. Experiments on two collected multi-view sketch datasets show that the method synthesizes 3D-consistent, high-fidelity content with fine-grained sketch control from text prompts, achieving state-of-the-art performance in sketch similarity and text alignment.
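The optimization loop described above can be sketched in miniature. This is a hedged, illustrative toy only: the real Sketch2NeRF pipeline optimizes an actual NeRF with gradients distilled from a sketch-conditioned ControlNet, whereas here the "scene" is a small parameter vector, the renderer is a stub, and `sds_grad` is a hypothetical stand-in for the score-distillation-style gradient `w(t) * (eps_pred - eps)`. The names `TARGET`, `render`, `sds_grad`, and `optimize` are assumptions for illustration, not the authors' API.

```python
import random

# Hypothetical stand-in for a sketch-consistent rendering the diffusion
# model would pull the scene toward; a real pipeline has no such target.
TARGET = [0.8, 0.2, 0.5]

def render(params, view):
    # Toy "renderer": returns the scene parameters directly.
    # A real NeRF would volume-render an image from the sampled view.
    return list(params)

def sds_grad(rendered, noise_level):
    # Score-distillation-style gradient, w(t) * (eps_pred - eps).
    # In this toy, the predicted-minus-sampled noise residual collapses
    # to the residual between the render and the sketch-consistent target.
    w = 1.0 - noise_level
    return [w * (r - t) for r, t in zip(rendered, TARGET)]

def optimize(steps=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    params = [rng.random() for _ in TARGET]
    for _ in range(steps):
        t = rng.uniform(0.02, 0.98)     # sample a diffusion timestep
        view = rng.uniform(0.0, 360.0)  # sample a camera azimuth
        g = sds_grad(render(params, view), t)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

params = optimize()
```

Each iteration samples a random view and diffusion timestep, renders the scene, and nudges the scene parameters along the distilled gradient; repeating this over many views is what yields multi-view-consistent 3D content in the full method.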
Target Users:
Creators of 3D content in virtual reality, game development, animation production, and related fields.
Total Visits: 29.7M
Top Region: US(17.94%)
Website Views: 54.6K
Use Cases
Generate realistic 3D game scenes using Sketch2NeRF
Synthesize realistic objects for use in virtual reality environments with Sketch2NeRF
Create detailed 3D animation scenes using Sketch2NeRF
Features
Control 3D Generation with Sketches
Synthesize High-Fidelity 3D Content
Optimize Neural Radiance Fields (NeRF)