EXAONE-3.5-7.8B-Instruct-AWQ
Overview
EXAONE 3.5 is a series of instruction-tuned bilingual (English and Korean) generative models developed by LG AI Research, ranging from 2.4B to 32B parameters. The models support long-context processing of up to 32K tokens and deliver state-of-the-art performance on real-world use cases and long-context understanding, while remaining competitive in general domains with similarly sized recently released models. The EXAONE 3.5 lineup includes: 1) a 2.4B model optimized for deployment on small or resource-constrained devices; 2) a 7.8B model matching the size of its predecessor while offering improved performance; and 3) a 32B model delivering the most powerful performance.
Target Users
The target audience includes developers and researchers who require long-context processing and bilingual text generation. With its strong performance and long-context comprehension, the EXAONE-3.5-7.8B-Instruct-AWQ model is particularly well suited to complex tasks involving large datasets and multilingual content, such as machine translation, text summarization, and dialogue systems.
Use Cases
Use the EXAONE-3.5-7.8B-Instruct-AWQ model for machine translation of long texts.
Develop a multi-turn dialogue system using the model to provide a more natural and fluent conversational experience.
Utilize the model for text summarization and key information extraction when handling large volumes of text data.
Features
Supports long context processing, with a maximum of 32K tokens.
Demonstrates state-of-the-art performance in real-world use cases and long context understanding.
Remains competitive in general domains compared to recently released similarly sized models.
Supports bilingual (English and Korean) generation.
Provides AWQ-quantized weights using 4-bit group-wise weight quantization (W4A16g128).
Supports various deployment frameworks, including TensorRT-LLM, vLLM, and SGLang.
Offers pre-quantized EXAONE 3.5 models in GGUF format.
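As one of the deployment frameworks listed above, vLLM can serve the AWQ checkpoint directly through its OpenAI-compatible server. A minimal sketch (the flags shown are illustrative; check the vLLM documentation for your version):

```shell
# Serve the AWQ checkpoint with vLLM; --quantization awq selects the
# AWQ kernels, and --max-model-len matches the model's 32K context.
vllm serve LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct-AWQ \
  --quantization awq \
  --max-model-len 32768
```

Once running, the server accepts standard OpenAI-style chat-completion requests against the local endpoint.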
How to Use
1. Install the necessary libraries, such as transformers and autoawq.
2. Load the EXAONE-3.5-7.8B-Instruct-AWQ model and tokenizer from Hugging Face.
3. Prepare the input text, which can be in English or Korean.
4. Use the tokenizer to encode the input text.
5. Pass the encoded input to the model for generation.
6. Adjust generation parameters as needed, such as the maximum number of new tokens and whether to sample.
7. Decode the generated tokens with the tokenizer and output the resulting text.
8. Analyze the generated text and utilize it for further application development.
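The steps above can be sketched in Python with transformers. The model id matches the official Hugging Face repository; the system message, prompt, and generation settings are illustrative assumptions, not fixed requirements:

```python
MODEL_ID = "LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct-AWQ"

def build_messages(user_text):
    # Chat-format input for the tokenizer's chat template; the system
    # message here is an illustrative placeholder.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_text},
    ]

def generate(user_text, max_new_tokens=128):
    # Imports are kept inside the function so the sketch can be read
    # without transformers installed; calling it downloads the weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",      # AWQ weights run as W4A16
        device_map="auto",       # place layers on available devices
        trust_remote_code=True,  # EXAONE ships a custom model class
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_text),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens,
                            do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

Calling `generate("Translate into Korean: Hello, how are you?")` fetches the quantized checkpoint from Hugging Face (a multi-gigabyte download) and requires a CUDA-capable GPU for the AWQ kernels.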