ComfyUI LLM Party
Overview:
ComfyUI LLM Party aims to create a complete set of LLM workflow nodes based on the ComfyUI frontend, enabling users to quickly and easily construct their own LLM workflows and seamlessly integrate them into existing image workflows.
Target Users:
The target audience includes individuals and businesses that need to build custom AI assistants, manage industry knowledge bases, create complex agent interaction patterns, connect to social app interfaces, produce live-streaming content, conduct academic research, or adapt models.
Use Cases
Sample workflow for calling LLM via API
Sample workflow using local models
Sample workflow for managing local models with Ollama
Sample workflow for knowledge base RAG search
Sample invocation of the code interpreter (see the sketch after this list)
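At its core, a code-interpreter tool takes a code string produced by the LLM, runs it in an isolated process, and returns the captured output as the tool result. The sketch below illustrates that general pattern in plain Python; it is not LLM Party's actual implementation, and run_python_snippet is a hypothetical helper name.

```python
import os
import subprocess
import sys
import tempfile

def run_python_snippet(code: str, timeout: int = 10) -> str:
    """Run an LLM-generated Python snippet in a separate process and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout if result.returncode == 0 else result.stderr
    finally:
        os.unlink(path)

print(run_python_snippet("print(2 ** 10)"))  # -> 1024
```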
Features
Supports various API calls and integration with local large models
Modular implementation of tool invocation
Supports character settings and quick construction of personal AI assistants
Supports industry-specific word-vector RAG and GraphRAG for knowledge base management (a minimal retrieval sketch follows this list)
Facilitates interaction patterns ranging from single-agent workflows to complex radial (hub-and-spoke) and circular agent-to-agent topologies
Enables access to required interfaces for personal social apps (e.g., QQ, Feishu, Discord)
Offers a one-stop LLM + TTS + ComfyUI workflow for streaming professionals
Offers students a simple starting point for building their first LLM application
Supports various parameter tuning interfaces and model adaptations commonly used by researchers
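To make the word-vector RAG feature concrete, the sketch below shows the basic retrieval step: embed the documents and the query, then rank by cosine similarity. It is a minimal illustration under assumptions, not LLM Party's code; the sentence-transformers model name is an arbitrary choice.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

docs = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days of purchase.",
    "Our support line is open on weekdays from 9:00 to 17:00.",
]
# Normalized embeddings let a plain dot product act as cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("How long do I have to return an item?"))
```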
How to Use
Drag a workflow into ComfyUI, then use ComfyUI-Manager to install any missing nodes
Use API to call LLM: start_with_LLM_api
Manage local LLM with Ollama: start_with_Ollama
Use a local LLM in its standard distributed format: start_with_LLM_local
Use a local GGUF-format LLM: start_with_LLM_GGUF (see the GGUF sketch after this list)
Use a local VLM in its standard distributed format: start_with_VLM_local
Use a local GGUF-format VLM: start_with_VLM_GGUF
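For reference, GGUF models are commonly loaded with llama-cpp-python; whether LLM Party uses that library internally is an assumption here, and the model path below is purely illustrative.

```python
from llama_cpp import Llama  # assumed GGUF backend

# Point model_path at any chat-tuned GGUF file you have downloaded locally.
llm = Llama(model_path="models/qwen2-7b-instruct-q4_k_m.gguf", n_ctx=4096)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize ComfyUI in one sentence."}],
)
print(resp["choices"][0]["message"]["content"])
```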
If using an API, fill in your base_url (a relay API works too; make sure it ends with /v1/) and api_key in the API LLM loading node, for example:
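These settings correspond to a standard OpenAI-compatible client call, sketched below. The model name and key are placeholders; note that Ollama also exposes this interface at http://localhost:11434/v1.

```python
from openai import OpenAI

# base_url must end with /v1/; a relay endpoint or Ollama
# (http://localhost:11434/v1) also works here.
client = OpenAI(
    base_url="https://api.openai.com/v1/",
    api_key="sk-...",  # placeholder; use your real key
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Hello from a ComfyUI workflow!"}],
)
print(resp.choices[0].message.content)
```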