EXAONE-3.5-7.8B-Instruct
Overview
EXAONE 3.5 is a series of bilingual (English and Korean) instruction-tuned generative models developed by LG AI Research, with parameter sizes of 2.4B, 7.8B, and 32B; EXAONE-3.5-7.8B-Instruct is the 7.8B member of the series. The models support long-context processing of up to 32K tokens and demonstrate state-of-the-art performance in real-world applications and long-context understanding, while remaining competitive in general domains with recently released models of similar size.
Target Users
The target audience includes researchers and developers who need long-context understanding and high-quality text generation, particularly when working with long documents and complex tasks. The model's bilingual capability also makes it suitable for users who need text generation in both English and Korean.
Use Cases
Used for generating high-quality English and Korean text
Provides advanced performance in long-context understanding tasks
Demonstrates competitiveness across various real-world use cases
Features
Number of parameters (excluding embeddings): 6.98B
Number of layers: 32
Attention: grouped-query attention (GQA) with 32 query heads and 8 key/value heads
Vocabulary size: 102,400
Context length: 32,768 tokens
Supports long context processing up to 32K tokens
Demonstrates state-of-the-art performance in real-world applications
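The grouped-query attention configuration above (32 query heads sharing 8 key/value heads) matters mainly for inference memory: only the key/value heads are stored in the KV cache. A rough back-of-the-envelope sketch, assuming a per-head dimension of 128 and bf16 weights (both assumptions, not stated in this card):

```python
# Rough KV-cache size comparison: why GQA (32 Q heads, 8 KV heads) shrinks
# the KV cache 4x versus full multi-head attention at the same width.
# head_dim = 128 is an assumed per-head dimension; layer count and context
# length come from the feature list above.

n_layers = 32
n_kv_heads = 8
n_q_heads = 32
head_dim = 128          # assumption: typical per-head dimension
context = 32_768        # max context length in tokens
bytes_per_value = 2     # bf16/fp16

def kv_cache_bytes(n_heads: int) -> int:
    # Two cached tensors (K and V) per layer, each of shape
    # [context, n_heads, head_dim].
    return 2 * n_layers * context * n_heads * head_dim * bytes_per_value

mha = kv_cache_bytes(n_q_heads)   # hypothetical full multi-head baseline
gqa = kv_cache_bytes(n_kv_heads)  # grouped-query attention as configured
print(f"MHA KV cache at full context: {mha / 2**30:.1f} GiB")
print(f"GQA KV cache at full context: {gqa / 2**30:.1f} GiB ({mha // gqa}x smaller)")
```

Under these assumptions the 8 KV heads cut the full-context KV cache from 16 GiB to 4 GiB, which is a large part of what makes 32K-token contexts practical on a single accelerator.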
How to Use
1. Install the transformers library version 4.43 or higher
2. Use AutoModelForCausalLM and AutoTokenizer to load the model and tokenizer from Hugging Face
3. Choose or design a prompt
4. Process messages using the tokenizer.apply_chat_template method to generate input IDs
5. Use the model's generate method to produce text
6. Decode the output tokens into text using the tokenizer.decode method
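The six steps above can be sketched with the Hugging Face transformers API as follows. The repository ID, system-prompt text, and generation parameters are assumptions for illustration; loading the weights requires a sizable download, so the generation part runs only when the script is executed directly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository ID for this model
MODEL_ID = "LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct"

# Step 3: choose or design a prompt (system message content is an assumption)
messages = [
    {"role": "system", "content": "You are EXAONE model from LG AI Research, a helpful assistant."},
    {"role": "user", "content": "Explain who you are in one sentence."},
]

if __name__ == "__main__":
    # Step 2: load the tokenizer and model (EXAONE ships custom modeling
    # code, so trust_remote_code=True is needed)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        device_map="auto",
    )
    # Step 4: render the chat template into input IDs
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # Step 5: generate a continuation
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    # Step 6: decode only the newly generated tokens
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Decoding only the tokens after `input_ids.shape[-1]` strips the echoed prompt, so the printed text is just the model's reply.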