WSE-3
Overview:
Cerebras Systems announced its third-generation wafer-scale engine (WSE-3), a chip built specifically for training the industry's largest AI models. The WSE-3 delivers twice the performance of its predecessor, the WSE-2, at the same power consumption and price. Built on a 5nm process, it packs 4 trillion transistors and 900,000 AI-optimized compute cores, delivering 125 petaflops of peak AI performance.
Target Users:
Suited to enterprises and research institutions that need to train large-scale AI models, such as deep learning and natural language processing workloads.
Use Cases
Research institutions utilize Cerebras Systems' WSE-3 chip to train advanced medical diagnostic models.
Companies leverage CS-3 systems for drug discovery and genomics research.
Hyperscale data centers use WSE-3 chips to build AI supercomputers for handling complex data analysis tasks.
Features
Delivers up to 125 petaflops of peak AI performance
Supports training of AI models with up to 24 trillion parameters
A single logical memory space can hold a 24-trillion-parameter model
Designed for enterprise and hyperscale deployments, with clusters scalable to 2048 systems (see the scaling sketch below)
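As a rough illustration of the headline numbers above, the sketch below multiplies the 125-petaflop per-system peak by the maximum advertised cluster size of 2048 systems. The linear-scaling assumption and the helper function name are illustrative only, not Cerebras specifications; real-world utilization will be lower than peak.

```python
# Back-of-the-envelope cluster scaling estimate (illustrative assumptions:
# linear scaling of peak figures; actual achieved throughput will be lower).

PEAK_PFLOPS_PER_SYSTEM = 125      # WSE-3 peak AI performance, in petaflops
MAX_SYSTEMS_PER_CLUSTER = 2048    # maximum advertised cluster size

def peak_cluster_exaflops(num_systems: int) -> float:
    """Aggregate peak AI compute for a cluster, in exaflops (hypothetical helper)."""
    if not 1 <= num_systems <= MAX_SYSTEMS_PER_CLUSTER:
        raise ValueError("cluster size outside the advertised range")
    return num_systems * PEAK_PFLOPS_PER_SYSTEM / 1000  # 1 exaflop = 1000 petaflops

if __name__ == "__main__":
    for n in (1, 64, 2048):
        print(f"{n:>5} systems -> {peak_cluster_exaflops(n):,.1f} EFLOPS peak")
    # A full 2048-system cluster works out to 256 exaflops of peak AI compute.
```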