EXAONE-3.5-2.4B-Instruct
Overview
EXAONE-3.5-2.4B-Instruct is the 2.4B-parameter member of EXAONE 3.5, a series of bilingual (English and Korean) instruction-tuned generation models developed by LG AI Research, with parameter sizes ranging from 2.4B to 32B. These models support long-context processing of up to 32K tokens and demonstrate state-of-the-art performance in real-world use cases and long-context understanding, while remaining competitive in general domains with similarly sized, recently released models.
Target Users
The target audience includes developers and researchers who need to process large volumes of text data and multilingual conversations. With support for long context processing and bilingual capabilities, EXAONE-3.5-2.4B-Instruct is particularly suited for applications that require understanding and generating complex text content, such as automatic translation, text summarization, and conversational systems.
Use Cases
Automatic translation: Translate English text into Korean and vice versa.
Text summarization: Generate brief summaries of long articles or reports.
Conversational systems: Create intelligent assistants capable of understanding and responding to user input.
Features
Number of parameters (excluding embedding layers): 2.14B
Number of layers: 30
Number of attention heads: GQA, with 32 query heads and 8 key-value heads
Vocabulary size: 102,400
Context length: 32,768 tokens
Tied word embeddings: True (unlike the 7.8B and 32B models)
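The grouped-query attention (GQA) layout listed above means the 32 query heads share only 8 key-value heads, so each KV head serves 32 / 8 = 4 query heads. A minimal sketch of that mapping (the function name is illustrative, not part of the model's API):

```python
# GQA head grouping as listed in the features: 32 query heads, 8 KV heads.
NUM_Q_HEADS = 32
NUM_KV_HEADS = 8

def kv_head_for_query(q_head: int) -> int:
    """Map a query-head index to the key-value head it attends with."""
    group_size = NUM_Q_HEADS // NUM_KV_HEADS  # 4 query heads per KV head
    return q_head // group_size

# Query heads 0-3 share KV head 0, heads 4-7 share KV head 1, and so on.
assert kv_head_for_query(0) == 0
assert kv_head_for_query(31) == 7
```

Sharing KV heads this way shrinks the key-value cache by a factor of 4, which is what makes the 32K-token context practical at inference time.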
How to Use
1. Install the transformers library version 4.43 or higher.
2. Load the model and tokenizer from Hugging Face using AutoModelForCausalLM and AutoTokenizer.
3. Choose or create a prompt, which can be in English or Korean.
4. Use the tokenizer.apply_chat_template method to convert messages and prompts into a format the model can understand.
5. Use the model.generate method to generate text.
6. Use the tokenizer.decode method to convert generated tokens back into text.
7. Print or otherwise utilize the generated text.
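The steps above can be sketched as follows. The Hugging Face model id `LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct`, the `trust_remote_code` flag, and the helper function names are assumptions based on the usual transformers pattern; adjust them to the official model card.

```python
# Sketch of steps 1-7, assuming transformers >= 4.43 is installed and the
# model id below is correct (an assumption, not taken from this article).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"

def build_messages(prompt: str) -> list[dict]:
    """Step 3: wrap a user prompt (English or Korean) in chat format."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Step 2: load model and tokenizer from Hugging Face.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True,  # assumed: EXAONE ships custom model code
    )
    # Step 4: convert the messages into model-ready token ids.
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    # Step 5: generate new tokens.
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Step 6: decode only the newly generated tokens back into text.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Step 7 is then just `print(generate("Translate to Korean: Good morning."))`, substituting any English or Korean prompt.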