

CHOIS
Overview
Controllable Human-Object Interaction Synthesis (CHOIS) simultaneously generates object and human motion from a language description, the initial states of the object and the human, and a sparse set of object trajectory points. This capability is crucial for simulating realistic human behavior, particularly in scenarios that demand precise hand-object contact and plausible, ground-supported interaction. CHOIS trains a conditional diffusion model with an additional object geometry loss as supervision, so that the generated object motion closely follows the input trajectory points, and applies designed guidance terms during sampling to enforce contact constraints, keeping the synthesized interactions physically plausible.
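The contact guidance mentioned above can be illustrated with a minimal sketch: at a sampling step, the predicted hand position is nudged toward the nearest object surface point, shrinking the hand-object gap. The function names, the point representation, and the fixed guidance weight below are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a contact-guidance step at sampling time.
# `guide_contact` and the 3-tuple point layout are illustrative,
# not the authors' released API.

def l2(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def guide_contact(hand_pos, object_points, weight=0.5):
    """One guidance step: move hand_pos a fraction `weight` of the way
    toward the closest object point, reducing hand-object distance."""
    target = min(object_points, key=lambda p: l2(hand_pos, p))
    return tuple(h + weight * (t - h) for h, t in zip(hand_pos, target))

hand = (0.0, 1.0, 0.0)
obj = [(0.0, 0.5, 0.0), (1.0, 1.0, 1.0)]
guided = guide_contact(hand, obj)
```

In the real method the correction comes from the gradient of a contact objective applied across the denoising steps; the fixed interpolation weight here only conveys the direction of the update.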
Target Users
CHOIS targets computer vision researchers, animators, game developers, and simulation engineers. It helps them create more realistic and natural human-object interaction scenes, improving the fidelity of simulations and animations while enriching interactive experiences in games and virtual reality applications.
Use Cases
1. Researchers use CHOIS technology to simulate human interaction with furniture in a home environment.
2. Animators utilize CHOIS to generate natural interaction scenes between characters and props in films.
3. Game developers employ CHOIS technology to design realistic interaction motions for NPCs in games.
Features
- Generate synchronized object and human movements based on linguistic descriptions.
- Synthesize object and human movements using conditional diffusion models.
- Use sparse object trajectory points as input conditions to synthesize interactions within a scene context.
- Introduce object geometric loss to enhance the matching between generated object movements and input trajectory points.
- Design guiding terms to enforce contact constraints during the sampling process.
- Support extraction of trajectory points from 3D scenes as input conditions.
- Capable of handling complex interaction scenarios that require precise hand-object contact and ground support.
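The object geometry loss in the list above can be sketched as a reconstruction error evaluated only at the frames where input trajectory points are given. The function name and the frame-indexed dictionary layout below are illustrative assumptions; the paper's loss operates on richer object geometry than a single position per frame.

```python
# Illustrative sketch of an object geometry loss over sparse waypoints.
# `waypoint_loss` and its data layout are assumptions, not the paper's code.

def waypoint_loss(pred_traj, waypoints):
    """Mean squared error between predicted object positions and the
    sparse input trajectory points, evaluated at the waypoint frames.
    pred_traj: list of (x, y, z) per frame; waypoints: frame -> (x, y, z)."""
    err = 0.0
    for frame, target in waypoints.items():
        err += sum((p - t) ** 2 for p, t in zip(pred_traj[frame], target))
    return err / len(waypoints)

pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
points = {0: (0.0, 0.0, 0.0), 2: (2.0, 0.0, 1.0)}
loss = waypoint_loss(pred, points)
```

Because the loss is only defined at the sparse waypoint frames, the diffusion model remains free to synthesize plausible motion between them.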
How to Use
1. Prepare the initial states of the object and the human.
2. Provide a linguistic description that outlines the desired human-object interaction scene.
3. Define sparse object trajectory points to serve as input conditions for interaction synthesis.
4. Use the CHOIS model, inputting the aforementioned data and description.
5. The model will generate synchronized object and human movements based on the input.
6. Observe whether the generated interaction meets expectations and, if necessary, adjust the input conditions and regenerate.
7. Apply the generated interaction to the corresponding 3D scene or animation.
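The generate-inspect-regenerate loop in steps 4 through 6 can be sketched as a simple driver. The callables `generate` and `waypoint_error`, the tolerance, and the retry count below are illustrative placeholders, not part of any released CHOIS API.

```python
# Hedged sketch of the generate/inspect/regenerate loop (steps 4-6).
# `generate` returns a candidate motion; `waypoint_error` scores how
# closely its object path hits the input trajectory points. Both are
# user-supplied stand-ins; the 0.05 tolerance is an arbitrary assumption.

def synthesize(generate, waypoint_error, tol=0.05, max_tries=3):
    """Regenerate until the waypoint error is within tolerance,
    or the retry budget is exhausted; return the last motion."""
    motion = None
    for _ in range(max_tries):
        motion = generate()
        if waypoint_error(motion) <= tol:
            break
    return motion

# Usage with stub callables: each retry yields a lower error.
attempts = iter([0.3, 0.1, 0.02])
result = synthesize(lambda: next(attempts), lambda m: m)
```

In practice a human reviewer would also judge visual quality (step 6) and adjust the description or trajectory points, not just an automatic error threshold.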