

ComfyUI LLM Party
Overview
ComfyUI LLM Party aims to create a complete set of LLM workflow nodes based on the ComfyUI frontend, enabling users to quickly and easily construct their own LLM workflows and seamlessly integrate them into existing image workflows.
Target Users
The target audience includes individuals and businesses that need to build custom AI assistants, manage industry knowledge bases, create complex agent interaction patterns, integrate with social-app interfaces, support live-streaming work, conduct academic research, or adapt models.
Use Cases
Sample workflow for calling LLM via API
Sample workflow using local models
Sample workflow for managing local models with Ollama
Sample workflow for knowledge base RAG search
Sample invocation for code interpreter
Features
Supports various API calls and integration with local large models
Modular implementation of tool invocation
Supports character settings and quick construction of personal AI assistants
Supports industry-specific word vector RAG and GraphRAG for knowledge base management
Facilitates the construction of interaction patterns ranging from single-agent pipelines to complex radial and cyclic agent-to-agent interactions
Enables access to required interfaces for personal social apps (e.g., QQ, Feishu, Discord)
Offers a one-stop LLM + TTS + ComfyUI workflow for streaming professionals
Offers a simple starting point for students building their first LLM application
Supports various parameter tuning interfaces and model adaptations commonly used by researchers
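The word-vector RAG feature above retrieves knowledge-base passages by embedding them and ranking by vector similarity to the query. As a minimal, self-contained illustration of that retrieval step (a toy bag-of-words vector stands in for a real embedding model, and the sample documents are invented), it can be sketched as:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. Real RAG pipelines use a trained
    # sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical knowledge base:
kb = [
    "The warranty covers parts for two years.",
    "Shipping takes three to five business days.",
]
top = retrieve("how long is the warranty", kb)
```

The retrieved passage is then inserted into the LLM prompt as context; GraphRAG additionally walks an entity graph built from the documents rather than ranking flat chunks.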
How to Use
Drag workflow into ComfyUI, then use ComfyUI-Manager to install any missing nodes
Use API to call LLM: start_with_LLM_api
Manage local LLM with Ollama: start_with_Ollama
Use a local Transformers-format LLM: start_with_LLM_local
Use local GGUF format LLM: start_with_LLM_GGUF
Use a local Transformers-format VLM: start_with_VLM_local
Use local GGUF format VLM: start_with_VLM_GGUF
If using an API, fill in your base_url (a relay/proxy API also works; make sure it ends with /v1/) and api_key in the API LLM loader node.
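The API loader node targets OpenAI-compatible endpoints, which is why the trailing /v1/ on base_url matters: the chat endpoint path is resolved relative to it. The following sketch (placeholder URL, key, and model names; it only builds the request rather than sending it) shows the shape of the call and why the trailing slash is significant:

```python
from urllib.parse import urljoin

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    # Resolve the OpenAI-compatible chat endpoint relative to base_url.
    # With a trailing slash, "https://api.example.com/v1/" + "chat/completions"
    # resolves to ".../v1/chat/completions"; without it, urljoin would drop
    # the /v1 segment entirely.
    url = urljoin(base_url, "chat/completions")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

# Placeholder values, not a real endpoint or key:
url, headers, payload = build_chat_request(
    "https://api.example.com/v1/", "sk-placeholder", "some-model", "Hello"
)
```

Sending `payload` as JSON to `url` with `headers` (e.g. via the `requests` library) returns the completion; the loader node handles this exchange for you.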