

ComfyUI LumaAI API
Overview
ComfyUI-LumaAI-API is a plugin for ComfyUI that lets users call the Luma AI API directly. Built on the Dream Machine video generation model developed by Luma, it expands video generation possibilities through a set of nodes such as text-to-video, image-to-video, and video preview, giving video creators and developers convenient tools.
Target Users
Designed for video creators, developers, and AI enthusiasts, the plugin is particularly suited to users who need to generate video content quickly or perform video editing. Its easy-to-use API interface makes the video generation process more efficient and intuitive.
Use Cases
Users can generate educational videos from text prompts using the LumaText2Video node.
Developers can transform static images into dynamic videos for commercial advertisements using the LumaImage2Video node.
Content creators can create smooth transitions between different versions of videos using the LumaInterpolateGenerations node.
Features
LumaAIClient Node: Creates a LumaAI client.
LumaText2Video Node: Generates videos from text prompts.
LumaImage2Video Node: Generates videos from images; the input image can serve as either the first or the last frame of the video.
LumaInterpolateGenerations Node: Interpolates between two generated videos.
LumaExtendGeneration Node: Extends an existing generation, with options to extend before or after the original video.
LumaPreviewVideo Node: Previews the video, scaling it to 768px for better display inside ComfyUI.
ImgBBUpload Node: Uploads images to ImgBB and returns the URL, as the Luma API currently only supports image URLs as inputs.
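Because the Luma API currently accepts only image URLs, an image must be hosted somewhere before it can drive an image-to-video generation, which is what the ImgBBUpload node handles. A minimal sketch of what preparing such an upload might look like; the helper name `build_imgbb_payload` is an illustrative assumption, not the plugin's actual code, though ImgBB's public upload endpoint does expect a base64-encoded image:

```python
import base64

# Public ImgBB upload endpoint (see ImgBB API docs).
IMGBB_ENDPOINT = "https://api.imgbb.com/1/upload"

def build_imgbb_payload(api_key: str, image_bytes: bytes) -> dict:
    """Assemble the form fields for an ImgBB upload request.

    Hypothetical helper for illustration; the node's internals may differ.
    ImgBB expects the image as a base64-encoded string.
    """
    return {
        "key": api_key,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }

# Posting this payload (e.g. requests.post(IMGBB_ENDPOINT, data=payload))
# returns JSON whose data.url field is the hosted image URL that the
# Luma image-to-video nodes can then consume.
payload = build_imgbb_payload("my-imgbb-key", b"\x89PNG fake image bytes")
```

The returned URL, not the raw image, is what gets wired into the LumaImage2Video node.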
How to Use
Install the ComfyUI-LumaAI-API plugin into the custom_nodes directory of ComfyUI.
Clone the GitHub repository: git clone https://github.com/lumalabs/ComfyUI-LumaAI-API.git
Install dependencies: cd ComfyUI-LumaAI-API && pip install -r requirements.txt
Configure the Luma AI API key; it can optionally be stored in the config.ini file.
Start ComfyUI and begin using the LumaAI API nodes.
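If you use the config.ini route, the file typically sits in the plugin directory and holds the API key. A sketch of what it might contain; the section and key names here are assumptions, so check the plugin's README for the exact format:

```ini
[API]
LUMAAI_API_KEY = your-luma-api-key-here
```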