

EXAONE 3.5 2.4B Instruct
Overview:
EXAONE-3.5-2.4B-Instruct is the 2.4B-parameter member of EXAONE 3.5, a series of bilingual (English and Korean) instruction-tuned generation models developed by LG AI Research with parameter sizes of 2.4B, 7.8B, and 32B. These models support long-context processing of up to 32K tokens and show state-of-the-art performance on real-world use cases and long-context understanding, while remaining competitive in general domains against recently released models of similar size. The model is well suited to scenarios that involve long texts and multilingual content.
Target Users:
The target audience is developers and researchers who need to process large volumes of text data and multilingual conversations. With its long-context support and bilingual capabilities, EXAONE-3.5-2.4B-Instruct fits applications that must understand and generate complex text, such as automatic translation, text summarization, and conversational systems.
Use Cases
Automatic translation: Translate English text into Korean and vice versa.
Text summarization: Generate brief summaries of long articles or reports.
Conversational systems: Create intelligent assistants capable of understanding and responding to user input.
Features
Number of parameters (excluding embedding layers): 2.14B
Number of layers: 30
Number of attention heads: GQA, with 32 Q heads and 8 KV heads
Vocabulary size: 102,400
Context length: 32,768 tokens
Tied word embeddings: True (unlike the 7.8B and 32B models)
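As a quick check of the figures above, the model's configuration can be inspected directly from Hugging Face. This is a minimal sketch, not official documentation: the repository id (LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct) and the exact config attribute names are assumptions and may differ in the EXAONE config class, so the snippet probes several common names.

    from transformers import AutoConfig

    # Assumed repository id; verify against the official model page.
    config = AutoConfig.from_pretrained(
        "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct",
        trust_remote_code=True,  # EXAONE ships a custom config/model class
    )

    # Attribute names vary between config classes, so probe a few candidates
    # (expected values per the list above: 30 layers, 32 Q / 8 KV heads,
    # 102,400 vocabulary entries, 32,768-token context, tied embeddings).
    for name in ("num_layers", "num_hidden_layers", "num_attention_heads",
                 "num_key_value_heads", "vocab_size",
                 "max_position_embeddings", "tie_word_embeddings"):
        if hasattr(config, name):
            print(f"{name} = {getattr(config, name)}")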
How to Use
1. Install the transformers library version 4.43 or higher.
2. Load the model and tokenizer from Hugging Face using AutoModelForCausalLM and AutoTokenizer.
3. Choose or create a prompt, which can be in English or Korean.
4. Use the tokenizer.apply_chat_template method to convert messages and prompts into a format the model can understand.
5. Use the model.generate method to generate text.
6. Use the tokenizer.decode method to convert generated tokens back into text.
7. Print or otherwise use the generated text (see the end-to-end sketch below).
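The sketch below strings these steps together. It is a minimal example rather than the official quick-start: the repository id, the system prompt, and the generation settings (greedy decoding, 256 new tokens) are assumptions chosen for illustration.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"  # assumed repository id

    # Step 2: load the model and tokenizer. trust_remote_code=True is needed
    # because EXAONE uses a custom architecture not bundled with transformers.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Step 3: the prompt may be written in English or Korean.
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of long-context "
                                    "language models in two sentences."},
    ]

    # Step 4: convert the chat messages into model-ready input ids.
    input_ids = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    # Steps 5-7: generate, decode only the newly produced tokens, and print.
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))

For Korean output, the user message can simply be written in Korean; no additional configuration is required, since the model is trained bilingually.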