EXAONE-3.5-32B-Instruct
Overview
EXAONE-3.5-32B-Instruct is the largest model in EXAONE 3.5, a series of instruction-tuned bilingual (English and Korean) generation models developed by LG AI Research, with variants ranging from 2.4B to 32B parameters. These models support long-context processing of up to 32K tokens and show state-of-the-art performance in real-world use cases and long-context understanding, while remaining competitive in general domains against recently released models of similar size.
Target Users
The target audience includes developers and researchers who need to generate and process text in multilingual environments. With its long-context processing and bilingual capabilities, the model is particularly well suited to scenarios involving lengthy texts and multilingual data, such as machine translation, text summarization, and conversational systems.
Use Cases
Develop a multilingual chatbot using the EXAONE-3.5-32B-Instruct model to provide a smooth conversational experience.
Use the model in a machine translation project for efficient translation from English to Korean.
Assist content creators by generating creative copy and article drafts.
Features
Supports long-context processing of up to 32,768 tokens.
Demonstrates state-of-the-art performance in various real-world use cases.
Offers models in three different parameter sizes: 2.4B, 7.8B, and 32B to meet diverse deployment needs.
The model is instruction-tuned, making it particularly suitable for dialog and text generation tasks.
Supports bilingual capabilities (English and Korean), broadening the model's application range.
Performs strongly on multiple evaluation benchmarks, such as MT-Bench and LiveBench.
Provides pre-quantized models supporting various quantization types for optimized inference performance.
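For memory-constrained deployments, the pre-quantized variants load through the same `transformers` API. The sketch below is illustrative only: the `-AWQ` repository id is an assumption based on common Hugging Face naming, and it assumes the `autoawq` package is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for the AWQ-quantized variant (verify on Hugging Face).
quantized_id = "LGAI-EXAONE/EXAONE-3.5-32B-Instruct-AWQ"

model = AutoModelForCausalLM.from_pretrained(
    quantized_id,
    torch_dtype=torch.float16,  # AWQ kernels run with fp16 activations
    device_map="auto",          # spread layers across available GPUs
    trust_remote_code=True,     # EXAONE ships a custom model class
)
tokenizer = AutoTokenizer.from_pretrained(quantized_id)
```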
How to Use
1. Install necessary libraries, such as `transformers` and `torch`.
2. Use `AutoModelForCausalLM` and `AutoTokenizer` to load the model and tokenizer from Hugging Face.
3. Prepare input prompts, which can be in English or Korean.
4. Build a chat message list that pairs the system prompt recommended by the model with your user message.
5. Apply the tokenizer's chat template to the messages to obtain input IDs.
6. Call the model's `generate` method to produce output tokens.
7. Use the tokenizer's `decode` method to convert the generated tokens back into text (see the sketch after these steps).
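A minimal end-to-end sketch of these steps follows. The repository id and the system prompt are taken as assumptions from the model's Hugging Face listing; treat the generation settings as starting points and adjust `max_new_tokens` and the dtype for your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id for this model.
model_id = "LGAI-EXAONE/EXAONE-3.5-32B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32
    trust_remote_code=True,      # EXAONE ships a custom model class
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prompts may be in English or Korean.
messages = [
    {"role": "system", "content": "You are EXAONE model from LG AI Research, a helpful assistant."},
    {"role": "user", "content": "Explain the difference between bfloat16 and float16."},
]

# Apply the chat template to obtain input IDs ready for generation.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Generate output tokens and decode them back into text.
output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For the English-to-Korean translation use case, only the user turn changes, e.g. `{"role": "user", "content": "Translate into Korean: Hello, how are you?"}`.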