# Mixture of Experts Model

Moonlight
Moonlight is a 16B-parameter Mixture of Experts (MoE) model trained with the Muon optimizer, demonstrating strong performance in large-scale training. By incorporating weight decay and adjusting per-parameter update scales, it significantly improves training efficiency and stability. Moonlight surpasses comparable models on a range of benchmarks while substantially reducing the compute required for training. Its open-source implementation and pre-trained checkpoints give researchers and developers a powerful toolset for natural language processing tasks such as text generation and code generation.
AI Model
50.8K
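As a rough illustration of the orthogonalized-momentum idea behind Muon, here is a minimal NumPy sketch. The quintic Newton–Schulz coefficients come from the public Muon reference implementation; the learning rate, momentum, and weight-decay values are placeholder assumptions, and Moonlight's update-scale (RMS-matching) adjustments are omitted for brevity.

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=5):
    """Approximately orthogonalize a matrix via a quintic Newton-Schulz
    iteration (coefficients from the public Muon reference implementation)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (np.linalg.norm(g) + 1e-7)   # normalize so the iteration converges
    transposed = x.shape[0] > x.shape[1]
    if transposed:
        x = x.T                          # iterate on the short-fat orientation
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * (s @ s)) @ x
    return x.T if transposed else x

def muon_step(w, grad, momentum, lr=0.02, beta=0.95, weight_decay=0.01):
    """One Muon-style update: momentum, orthogonalize, decoupled weight decay."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(momentum)
    w = (1 - lr * weight_decay) * w - lr * update
    return w, momentum
```

In a full optimizer this update is applied per weight matrix; scalar and embedding parameters typically fall back to AdamW.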

DeepSeek-VL2-Small
DeepSeek-VL2 is a series of advanced large-scale Mixture of Experts (MoE) vision-language models, significantly improved over its predecessor DeepSeek-VL. The series demonstrates strong capabilities across tasks including visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. It comprises three variants — DeepSeek-VL2-Tiny, DeepSeek-VL2-Small, and DeepSeek-VL2 — with 1.0 billion, 2.8 billion, and 4.5 billion activated parameters respectively, and achieves competitive or state-of-the-art performance against existing dense and MoE-based open-source models with similar or fewer activated parameters.
AI Model
55.2K

Aria
Aria is a natively multimodal Mixture of Experts model that excels at multimodal, language, and coding tasks. It performs especially well on video and document understanding, supports multimodal input of up to 64K tokens, and can caption a 256-frame video in about 10 seconds. The model has 25.3 billion parameters and can be loaded on a single A100 (80 GB) GPU in bfloat16 precision. Aria was developed for multimodal data understanding, particularly video and document processing, and is released as an open-source model to advance multimodal artificial intelligence.
AI Model
51.1K
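The single-GPU claim is easy to sanity-check with back-of-the-envelope arithmetic: bfloat16 stores each parameter in 2 bytes, so the weights alone need roughly 50 GB (activations and the KV cache add overhead on top of that, which is why the 80 GB card matters):

```python
params = 25.3e9          # Aria's reported parameter count
bytes_per_param = 2      # bfloat16 uses 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb, 1))  # 50.6 -> comfortably under an A100's 80 GB
```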

GRIN-MoE
GRIN-MoE is a Mixture of Experts (MoE) model developed by Microsoft that focuses on strong performance in resource-constrained environments. Unlike conventional MoE training, it uses SparseMixer-v2 to estimate the gradient of the discrete expert-routing decision, scaling model training without relying on expert parallelism or token dropping. It is particularly strong on coding and mathematical tasks, making it well suited to scenarios that demand solid reasoning.
AI Model
47.7K
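To make the routing in question concrete, here is a minimal NumPy sketch of top-2 expert routing with renormalized softmax gating, the standard MoE forward pass. Note this shows only the forward computation: SparseMixer-v2's contribution is a gradient estimator for the discrete top-k selection, which is not visible in forward code. The shapes and toy experts are illustrative assumptions.

```python
import numpy as np

def top2_moe_forward(x, gate_w, experts):
    """Route each token to its top-2 experts; mix outputs by softmax gates."""
    logits = x @ gate_w                         # (tokens, n_experts)
    top2 = np.argsort(logits, axis=-1)[:, -2:]  # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top2[t]]
        gates = np.exp(sel - sel.max())
        gates /= gates.sum()                    # renormalize over the selected pair
        for g, e in zip(gates, top2[t]):
            out[t] += g * experts[e](x[t])      # weighted sum of expert outputs
    return out
```

In a real model each expert is a feed-forward block, and only the selected experts run per token, which is where MoE's compute savings come from.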

Yuan2.0-M32-hf-int8
Yuan2.0-M32-hf-int8 is a Mixture of Experts (MoE) language model with 32 experts, 2 of which are active per token. A new routing network, the attention router, improves the efficiency of expert selection, yielding a 3.8% accuracy gain over a model using a conventional routing network. Yuan2.0-M32 was trained from scratch on 2 trillion tokens, with a training compute demand of only 9.25% of that of a dense model at the same parameter scale. The model is competitive in programming, mathematics, and various specialized fields while using only 3.7 billion active parameters out of 40 billion in total. A forward pass requires only 7.4 GFLOPs per token, 1/19th of what Llama3-70B demands. Yuan2.0-M32 outperforms Llama3-70B on the MATH and ARC-Challenge benchmarks, scoring 55.9% and 95.8% accuracy respectively.
AI Model
51.3K
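The 1/19 figure can be sanity-checked with the common rule of thumb (an assumption here, not a claim from the listing) that a dense decoder spends about 2 FLOPs per parameter per generated token:

```python
# Rule of thumb (assumption): dense forward pass ~ 2 FLOPs per parameter per token.
llama3_70b_gflops = 2 * 70        # ~140 GFLOPs per token for a 70B dense model
yuan_m32_gflops = 7.4             # figure quoted for Yuan2.0-M32
ratio = llama3_70b_gflops / yuan_m32_gflops
print(round(ratio, 1))            # 18.9 -> consistent with the quoted 1/19
```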
Featured AI Tools

Flow AI
Flow is an AI-driven filmmaking tool designed for creators. Using Google DeepMind's advanced models, it lets users easily create film clips, scenes, and stories, providing a seamless creative experience with support for user-supplied assets or content generated within Flow. Pricing is tiered through the Google AI Pro and Google AI Ultra plans, which bundle different feature sets for different user needs.
Video Production
43.1K

NoCode
NoCode is a platform that requires no programming experience: users describe what they want in natural language and quickly generate an application, lowering the barrier to development so more people can realize their ideas. The platform offers real-time previews and one-click deployment, making it well suited to non-technical users turning ideas into reality.
Development Platform
46.1K

ListenHub
ListenHub is a lightweight AI podcast-generation tool supporting both Chinese and English. Built on cutting-edge AI technology, it quickly generates podcast content on topics of interest to the user. Its main strengths are natural dialogue and highly realistic voices, delivering a high-quality listening experience anytime, anywhere. ListenHub speeds up content generation and works well on mobile devices, making it convenient to use in different settings; it is positioned as an efficient information-acquisition tool for a broad range of listeners.
AI
43.6K

MiniMax Agent
MiniMax Agent is an intelligent AI companion built on the latest multimodal technology. Through MCP-based multi-agent collaboration, teams of AI agents work together to solve complex problems efficiently. It offers instant answers, visual analysis, and voice interaction, and is claimed to raise productivity tenfold.
Multimodal technology
45.3K

Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's latest AI image-generation model, with significant gains in generation speed and image quality. A very-high-compression-ratio codec and a new diffusion architecture bring generation times down to milliseconds, avoiding the wait of traditional generation. The model also combines reinforcement learning with human aesthetic preferences to improve realism and detail, targeting professional users such as designers and creators.
Image Generation
44.2K

OpenMemory MCP
OpenMemory is an open-source personal memory layer that provides private, portable memory management for large language models (LLMs). It gives users full control over their data and keeps it secure while they build AI applications. The project supports Docker, Python, and Node.js, making it a good fit for developers seeking personalized AI experiences, particularly those who want to use AI without exposing personal information.
open source
43.9K

FastVLM
FastVLM is an efficient visual-encoding model designed specifically for vision-language models. Its innovative FastViTHD hybrid vision encoder cuts both the time needed to encode high-resolution images and the number of output tokens, yielding strong speed and accuracy. FastVLM aims to give developers powerful vision-language processing across a range of scenarios, performing especially well on mobile devices that require fast response.
Image Processing
42.0K

LiblibAI
LiblibAI is a leading Chinese AI creative platform offering powerful tools that help creators bring their imaginations to life. It provides a large library of free AI creative models that users can search and use for image, text, and audio creation, and users can also train their own models on the platform. Focused on creators' diverse needs, LiblibAI aims to serve the creative industry and make the joy of creation accessible to everyone.
AI Model
6.9M