EXAONE-3.5-7.8B-Instruct-GGUF
Overview:
EXAONE 3.5 is a series of bilingual (English and Korean) instruction-tuned generation models developed by LG AI Research, with parameter counts ranging from 2.4B to 32B. The models support long-context processing of up to 32K tokens and demonstrate state-of-the-art performance in real-world use cases and long-context understanding, while remaining competitive in general domains against recently released models of similar size. The EXAONE 3.5 series includes: 1) a 2.4B model optimized for deployment on small or resource-constrained devices; 2) a 7.8B model that matches the size of its predecessor while offering improved performance; and 3) a 32B model that delivers the strongest performance in the series.
Target Users:
The target audience includes researchers and developers who need to deploy high-performance language models on resource-constrained devices, as well as application developers working with long-context information and multilingual text generation. EXAONE 3.5 is especially well suited to scenarios involving large datasets and complex language tasks, thanks to its strong performance and long-context processing capabilities.
Use Cases
Researchers use the EXAONE 3.5 model for semantic understanding and analysis of long texts.
Developers leverage the EXAONE 3.5 model to create multilingual dialogue systems.
Businesses utilize the EXAONE 3.5 model to enhance their customer service automation processes.
Features
Supports long-context processing capabilities of up to 32K tokens.
Demonstrates state-of-the-art performance in real-world use cases and long-context understanding.
Maintains competitiveness in general domains compared to recently released models of similar size.
Offers instruction-tuned 7.8B language models in multiple GGUF quantization types, including Q8_0, Q6_K, Q5_K_M, Q4_K_M, and IQ4_XS.
Supports multiple deployment frameworks such as TensorRT-LLM, vLLM, SGLang, llama.cpp, and Ollama; a usage sketch follows this list.
The quantized models are well suited to small or resource-constrained devices.
Provides pre-quantized EXAONE 3.5 models in AWQ as well as the GGUF quantization types listed above.
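As a rough illustration of the llama.cpp path mentioned above, the sketch below downloads one quantized file and runs a short bilingual chat completion with llama-cpp-python. The repository id and GGUF file name are assumptions about how the checkpoint is published, and the 32K context setting mirrors the long-context claim above; treat this as a minimal sketch rather than the official usage.

```python
# Minimal sketch: run an EXAONE 3.5 7.8B GGUF checkpoint locally with llama-cpp-python.
# Assumptions: the repo id and file name below match the published listing, and
# `pip install llama-cpp-python huggingface_hub` has been run.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization variant (Q4_K_M is a common size/quality trade-off).
model_path = hf_hub_download(
    repo_id="LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct-GGUF",  # assumed repo id
    filename="EXAONE-3.5-7.8B-Instruct-Q4_K_M.gguf",      # assumed file name
)

# Load the model with the 32K-token context window described above.
llm = Llama(
    model_path=model_path,
    n_ctx=32768,      # long-context support up to 32K tokens
    n_gpu_layers=-1,  # offload all layers to a GPU if one is available
)

# Bilingual (English/Korean) chat completion using the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful bilingual assistant."},
    {"role": "user", "content": "Summarize the benefits of long-context models in one sentence. 한국어로도 한 문장으로 답해 주세요."},
]
response = llm.create_chat_completion(messages=messages, max_tokens=256)
print(response["choices"][0]["message"]["content"])
```

Swapping the file name for another quantization (for example Q8_0 for higher fidelity or IQ4_XS for a smaller footprint) is the only change needed to trade memory use against output quality.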