# Model Training
Fresh Picks

Genie Studio
Genie Studio is a one-stop development platform built by Zhiyuan Robotics for embodied AI scenarios, with full-chain product capabilities covering data acquisition, model training, simulation evaluation, and model inference. It gives developers a standardized workflow spanning acquisition, training, testing, and inference, greatly lowering the barrier to development and improving efficiency. The platform promotes the rapid development and application of embodied AI technology through efficient data acquisition, flexible model training, precise simulation evaluation, and seamless model inference. Genie Studio not only provides powerful tools but also supports the large-scale deployment of embodied AI, accelerating the industry's leap to a new stage of standardization, platformization, and mass production.
Development Platform
50.2K

EaseVoice Trainer
EaseVoice Trainer is a backend project designed to simplify and enhance the speech synthesis and conversion training process. This project is an improvement based on GPT-SoVITS, focusing on user experience and system maintainability. Its design philosophy differs from the original project, aiming to provide a more modular and customizable solution suitable for various scenarios, from small-scale experiments to large-scale production. This tool can help developers and researchers conduct speech synthesis and conversion research and development more efficiently.
Development & Tools
50.0K

Firecrawl LLMs.txt Generator
The LLMs.txt generator is an online tool powered by Firecrawl, designed to help users generate integrated text files for LLM training and inference from websites. By integrating web content, it provides high-quality text data for training large language models, thereby improving model performance and accuracy. The main advantages of this tool are its simple operation and high efficiency, allowing for the quick generation of required text files. It is primarily aimed at developers and researchers who need a large amount of text data for model training, providing them with a convenient solution.
Model Training and Deployment
63.8K
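The core idea behind such a generator, consolidating a site's pages into a single plain-text file an LLM can ingest, can be sketched without the tool itself. The snippet below is a minimal illustration rather than Firecrawl's API; `build_llms_txt` and the sample page are hypothetical, and tags are stripped with Python's standard `html.parser`.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def build_llms_txt(pages):
    """pages: dict mapping URL -> raw HTML. Returns one consolidated text file."""
    sections = []
    for url, html in pages.items():
        extractor = TextExtractor()
        extractor.feed(html)
        # Each page becomes a titled section so the model keeps provenance.
        sections.append(f"# {url}\n" + "\n".join(extractor.parts))
    return "\n\n---\n\n".join(sections)

pages = {
    "https://example.com/docs": "<html><body><h1>Docs</h1><p>Install the SDK.</p></body></html>",
}
print(build_llms_txt(pages))
```

In a real pipeline the `pages` dict would come from a crawler; the interesting part is only the consolidation step.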

MLGym
MLGym is an open-source framework and benchmark developed by Meta's GenAI team and the UCSB NLP team for training and evaluating AI research agents. By offering diverse AI research tasks, it fosters the development of reinforcement learning algorithms and helps researchers train and evaluate models in real-world research scenarios. The framework supports various tasks, including computer vision, natural language processing, and reinforcement learning, aiming to provide a standardized testing platform for AI research.
Model Training and Deployment
56.6K

kg-gen
kg-gen is an artificial intelligence-based tool that extracts knowledge graphs from plain text. It supports text inputs ranging from single sentences to lengthy documents and can handle conversational-style messages. Leveraging advanced language models and structured output techniques, this tool helps users quickly construct knowledge graphs, suitable for natural language processing, knowledge management, and model training, among other applications. kg-gen provides flexible interfaces and a variety of functionalities designed to streamline the knowledge graph generation process and enhance efficiency.
Knowledge Management
73.4K
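As a rough illustration of what triple extraction means (kg-gen itself relies on language models with structured outputs, not hand-written rules), the sketch below pulls (subject, relation, object) triples from simple copula sentences with a regex. `extract_triples` and the pattern are hypothetical and not part of kg-gen.

```python
import re

# Toy rule: match "X is (a/an) Y" sentences and emit an is_a triple.
PATTERN = re.compile(r"(?P<subj>[A-Z][\w-]*) is (?:a |an )?(?P<obj>[\w -]+)")

def extract_triples(text):
    """Return (subject, 'is_a', object) triples from simple copula sentences."""
    triples = []
    for sentence in re.split(r"[.!?]", text):
        m = PATTERN.search(sentence.strip())
        if m:
            triples.append((m.group("subj"), "is_a", m.group("obj").strip()))
    return triples

print(extract_triples("Paris is the capital of France. Python is a programming language."))
```

An LLM-based extractor replaces the regex with a structured-output prompt, but the downstream graph construction consumes the same triple format.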

Steev
Steev is a tool specifically designed for AI model training, aimed at simplifying the training process and enhancing model performance. It automatically optimizes training parameters, provides real-time monitoring of the training process, and offers code reviews and suggestions to help users complete model training more efficiently. One of Steev's main advantages is its ease of use, requiring no configuration, making it suitable for engineers and researchers aiming to improve model training efficiency and quality. Currently in a free trial phase, users can experience all its features at no cost.
Model Training and Deployment
50.2K

Kolosal AI
Kolosal AI is a tool for training and running large language models (LLMs) on local devices. By streamlining the processes of model training, optimization, and deployment, it enables users to leverage AI technology efficiently on local hardware. The tool supports various hardware platforms, provides fast inference speeds, and offers flexible customization capabilities, making it suitable for a wide range of applications from individual developers to large enterprises. Its open-source nature also allows users to conduct secondary development according to their specific needs.
Model Training and Deployment
72.9K

Open Thoughts
Open Thoughts is a project led by Bespoke Labs and the DataComp community, aimed at curating high-quality open-source reasoning datasets for training advanced small models. It brings together researchers and engineers from universities and research institutions including Stanford University, the University of California, Berkeley, and the University of Washington, dedicated to advancing reasoning models through high-quality datasets. The project was established in response to the growing demand for reasoning models in fields such as mathematics and code, where high-quality datasets are critical to model performance. Currently free to access, it primarily targets researchers, developers, and professionals interested in reasoning models; its open-source datasets and tools serve as a significant resource for AI education and research.
AI Model
58.8K

RWKV-6 Mixture of Experts
Flock of Finches 37B-A11B v0.1 is the newest member of the RWKV family: an experimental model with 11 billion active parameters out of 37 billion total. Despite being trained on only 109 billion tokens, it scores comparably to the recently released Finch 14B model on common benchmarks. The model employs an efficient sparse mixture-of-experts (MoE) approach, activating only a subset of its parameters for any given token, which saves time and compute during both training and inference. Although this architectural choice costs more VRAM, we see it as a worthwhile trade-off: a model with greater capacity can be trained and operated at lower cost.
AI Model
56.9K
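The sparse routing described above can be sketched in a few lines: a router scores every expert for each token, and only the top-k experts are actually evaluated. This is a toy, dependency-free sketch of the general MoE technique, not RWKV's implementation; all names and sizes are made up.

```python
import math
import random

random.seed(0)
d_model, n_experts, top_k = 8, 4, 2

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Each expert is a weight matrix; only top_k of them run per token.
experts = [rand_matrix(d_model, d_model) for _ in range(n_experts)]
router = rand_matrix(n_experts, d_model)   # one routing row per expert

def moe_forward(x):
    """Route token x to the top_k highest-scoring experts, mix by softmax."""
    logits = matvec(router, x)                                # one score per expert
    top = sorted(range(n_experts), key=lambda i: logits[i])[-top_k:]
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    weights = [e / total for e in exps]                       # softmax over selected experts
    # Only top_k of the n_experts matrices are evaluated -> sparse activation.
    out = [0.0] * d_model
    for w, i in zip(weights, top):
        for j, val in enumerate(matvec(experts[i], x)):
            out[j] += w * val
    return out

y = moe_forward([random.gauss(0, 1) for _ in range(d_model)])
print(len(y))
```

The VRAM point follows directly: all `n_experts` matrices must be resident in memory even though only `top_k` are used per token.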

E2M
E2M is a Python library capable of parsing and converting multiple file types into Markdown format. It employs a parser-converter architecture, supporting the conversion of a variety of file formats including doc, docx, epub, html, htm, url, pdf, ppt, pptx, mp3, and m4a. The ultimate aim of the E2M project is to provide high-quality data for Retrieval-Augmented Generation (RAG) and model training or fine-tuning.
Development & Tools
63.5K
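The parser-converter split can be illustrated with a toy HTML-to-Markdown pipeline: a parser emits typed blocks, and a separate converter renders them. This is a minimal sketch of the architecture, not E2M's actual API; `BlockParser` and `to_markdown` are hypothetical names.

```python
from html.parser import HTMLParser

class BlockParser(HTMLParser):
    """Parser stage: turn HTML into a flat list of (kind, text) blocks."""
    def __init__(self):
        super().__init__()
        self.blocks = []
        self._kind = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "p"):
            self._kind, self._buf = tag, []

    def handle_data(self, data):
        if self._kind:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == self._kind:
            self.blocks.append((tag, "".join(self._buf).strip()))
            self._kind = None

def to_markdown(blocks):
    """Converter stage: render the intermediate blocks as Markdown."""
    prefix = {"h1": "# ", "h2": "## ", "p": ""}
    return "\n\n".join(prefix[kind] + text for kind, text in blocks)

parser = BlockParser()
parser.feed("<h1>Report</h1><p>All tests passed.</p>")
print(to_markdown(parser.blocks))
```

The appeal of the design is that each input format (docx, epub, pdf, audio transcripts) only needs its own parser; the Markdown converter stays shared.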

TRELLIS
TRELLIS is a native 3D generation model based on a unified structured latent representation and a correction transformer, capable of producing diverse and high-quality 3D assets. The model captures structural (geometric) and texture (appearance) information comprehensively by integrating sparse 3D meshes with dense multi-view visual features extracted from powerful visual foundation models, while maintaining flexibility during the decoding process. TRELLIS can handle up to 2 billion parameters and has been trained on a large dataset of 3D assets containing 500,000 diverse objects. It generates high-quality results conditioned on text or images, significantly surpassing existing methods, including recent approaches of similar scale. TRELLIS also demonstrates flexible output format options and local 3D editing capabilities, which were not provided by previous models. Source code, models, and data will be made available.
3D Modeling
91.9K

Prime
PrimeIntellect-ai/prime is a framework designed for efficient, globally distributed training of AI models over the internet. Through technological innovation, it facilitates cross-regional AI model training, improves computing resource utilization, and reduces training costs, which is critical for AI research and application development that requires significant computational resources.
Model Training and Deployment
52.4K

MM1.5
MM1.5 is a series of multimodal large language models (MLLMs) designed to enhance capabilities in understanding text-rich images, visual reference grounding, and multi-image reasoning. Building on the MM1 architecture, it adopts a data-centric training approach and systematically explores the impact of different data mixes across the model training lifecycle. MM1.5 models range from 1B to 30B parameters and include both dense and mixture-of-experts (MoE) variants; the extensive empirical and ablation studies detailing the training process and decisions provide valuable guidance for future MLLM research.
AI Model
51.1K

RECE
RECE is a concept erasure technology for text-to-image diffusion models that reliably and efficiently removes specific concepts by introducing regularization terms during model training. This technology is significant in enhancing the safety and controllability of image generation models, especially in scenarios where generating inappropriate content needs to be avoided. The primary advantages of RECE technology include high efficiency, high reliability, and easy integration into existing models.
AI image generation
55.8K

Flux Gym
Flux Gym is a streamlined Web UI designed for FLUX LoRA model training, particularly suited for devices with 12GB, 16GB, or 20GB VRAM. It combines the user-friendliness of the AI-Toolkit project with the flexibility of Kohya Scripts, enabling users to train models without complex terminal operations. Flux Gym allows users to upload images and add descriptions through a simple interface, then start the training process.
AI Model
88.0K

Easy Voice Toolkit
Easy Voice Toolkit is an AI voice toolkit based on open-source voice projects, providing various automated audio tools including speech model training. The toolkit seamlessly integrates to create a complete workflow, allowing users to selectively use these tools or utilize them in sequence to gradually convert raw audio files into ideal speech models.
AI audio editing
86.4K
English Picks

Civitai Green
Civitai Green is a community platform for AI enthusiasts, artists, and developers, offering AI model training, image and video creation, and art sharing. The platform supports users in creating, sharing, and utilizing various AI models to foster the development of AI art.
AI model training and deployment
127.2K

ai-toolkit
ai-toolkit is a research-oriented GitHub repository created by Ostris, primarily used for experiments and training with Stable Diffusion models. It contains a variety of AI scripts that support model training, image generation, LoRA extraction, and more. The toolkit is still in development and may have some instability, but it offers rich features and high customizability.
AI Model
68.4K

x-flux
x-flux is a collection of deep learning model training scripts released by the XLabs AI team, featuring LoRA and ControlNet models. These models leverage DeepSpeed for training and support image sizes of 512x512 and 1024x1024, along with corresponding training configuration files and examples. The x-flux model training aims to enhance the quality and efficiency of image generation, making it significant in the field of AI image generation.
AI Model
73.1K

AIMO Progress Prize
This GitHub repository contains the training and inference code to replicate our winning solution in the AI Mathematical Olympiad (AIMO) Progress Prize 1. Our solution consists of four main parts: a recipe for fine-tuning DeepSeekMath-Base 7B to solve math problems using Tool-Integrated Reasoning (TIR); two high-quality datasets of about 10 million math questions and solutions; an algorithm for generating solution candidates with code-execution feedback (SC-TIR); and four carefully selected validation sets from AMC, AIME, and MATH to guide model selection and avoid overfitting to the public leaderboard.
AI model inference training
63.5K
English Picks

Prime Intellect
Prime Intellect is committed to democratizing AI development at scale. It offers discovery of global compute resources, model training, and the ability to co-own intelligent innovations. By distributing training across clusters, it lets users train cutting-edge models and co-own the resulting open AI innovations, from language models to scientific breakthroughs.
Development Platform
67.9K

Prov-GigaPath
Prov-GigaPath is a whole-slide foundation model for digital pathology research. Trained on real-world data, it aims to support AI researchers in their studies of pathology foundational models and digital pathology slide data encoding. Developed by multiple authors and published in Nature, it is not suitable for clinical care or any clinical decision-making purposes and is restricted to research use only.
AI medical health
61.8K

CoreNet
CoreNet is a deep neural network toolkit that enables researchers and engineers to train both standard and innovative small to large-scale models for a variety of tasks, including foundational models (such as CLIP and LLM), object classification, object detection, and semantic segmentation.
AI Model
50.5K

LlamaParse
LlamaParse, part of the LlamaIndex project, is a tool for parsing and processing document data. LlamaIndex is a library for building applications on machine learning models, focusing on user-friendly interfaces and efficient data processing.
AI Development Assistant
104.1K

DataDreamer
DataDreamer is a powerful open-source Python library for prompt engineering, synthetic data generation, and training workflows. Designed for simplicity, extreme efficiency, and research-grade quality, DataDreamer supports creating prompt workflows, generating synthetic datasets, aligning and fine-tuning models, instruction tuning, model distillation, and simplifies the sharing and reproducibility of datasets and models.
AI Model
106.0K
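A synthetic-data workflow of the kind described can be sketched as: fill prompt templates, call a model, and collect prompt/response rows. The snippet below is a toy illustration and not DataDreamer's API; `fake_llm` is a stand-in for a real model call, and the glossary data is made up.

```python
import json

# Hypothetical template and lookup table standing in for a real task and model.
TEMPLATE = "Translate '{word}' into French."
GLOSSARY = {"cat": "chat", "dog": "chien", "house": "maison"}

def fake_llm(prompt):
    """Stand-in for a real model call; looks the answer up in the glossary."""
    word = prompt.split("'")[1]
    return GLOSSARY[word]

def generate_dataset(words):
    """Build prompt/response rows: the basic unit of a synthetic dataset."""
    rows = []
    for word in words:
        prompt = TEMPLATE.format(word=word)
        rows.append({"prompt": prompt, "response": fake_llm(prompt)})
    return rows

dataset = generate_dataset(["cat", "dog"])
print(json.dumps(dataset, indent=2))
```

Libraries like DataDreamer add the parts this sketch omits: caching, batching, provenance metadata, and reproducible sharing of the resulting dataset.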

V-JEPA
Meta has released the Video Joint Embedding Predictive Architecture (V-JEPA) model, a significant step toward advancing machine intelligence by building a more grounded understanding of the world.
AI Model
55.5K
Chinese Picks

StemGen
StemGen is an end-to-end music generation model trained to listen to musical contexts and generate appropriate responses. It is built upon a non-autoregressive language modeling architecture, similar to SoundStorm and VampNet. More details can be found in the paper. This page showcases various example outputs of this architecture model.
AI Music Generation
91.1K

AXLearn
AXLearn is a deep learning library built by Apple based on JAX and XLA. It takes an object-oriented approach to address software engineering challenges in large-scale deep learning model development. Its configuration system allows users to combine models from reusable building blocks and integrate with other libraries (such as Flax and Hugging Face transformers). AXLearn aims to scale training, supporting models with hundreds of billions of parameters efficiently trained on thousands of accelerators, suitable for deployment on public clouds. It also adopts a global computation paradigm, allowing users to describe computations on a global virtual machine instead of individual accelerators. AXLearn supports a wide range of applications, including natural language processing, computer vision, and speech recognition, and includes baseline configurations for training state-of-the-art models.
AI Development Assistant
55.8K

Datasaur
Datasaur is a leading NLP data annotation platform that can accelerate project speeds by 10x and improve model performance by 2x. It offers configurable annotation, advanced quality control, and automation features, empowering engineers to focus on building high-quality models.
Development and Tools
95.2K

Volcano Ark
Volcano Ark provides comprehensive functions and services for model training, inference, evaluation, and fine-tuning, with a focus on supporting the large-model ecosystem. It offers curated models for stability, a rich platform of applications and tools, information security, powerful computing capacity, and professional services. Key functions include a Model Marketplace, Model Experience, Model Training & Inference, and Model Applications. It suits application scenarios in industries such as automotive, finance, consumer goods, the broader internet, and education & office.
Model training and deployment
162.3K
Featured AI Tools
Chinese Picks

Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's newly released AI image generation model, with significantly improved generation speed and image quality. An ultra-high-compression codec and a new diffusion architecture bring image generation down to millisecond latency, eliminating the wait typical of traditional generation. By combining reinforcement learning with human aesthetic knowledge, the model also improves realism and detail, making it well suited to professional users such as designers and creators.
Image Generation
91.4K
English Picks

Lovart
Lovart is a revolutionary AI design agent that turns creative prompts into finished artwork, covering design needs from storyboards to brand visuals. Its significance lies in breaking with the traditional design workflow, saving time and boosting creative inspiration. Lovart is currently in beta; users can join the waitlist to try it out.
AI Design Tools
73.1K

FastVLM
FastVLM is an efficient vision encoding model designed for vision-language models. Its innovative FastViTHD hybrid vision encoder cuts the encoding time for high-resolution images and reduces the number of output tokens, giving the model excellent speed and accuracy. FastVLM aims to give developers powerful vision-language processing capabilities for a wide range of applications, and it performs especially well on mobile devices that demand fast responses.
AI Model
56.3K

KeySync
KeySync is a leakage-free lip-sync framework for high-resolution video. It addresses the temporal consistency problems of traditional lip-sync techniques while handling expression leakage and facial occlusion through a careful masking strategy. KeySync achieves state-of-the-art results in lip reconstruction and cross-sync, making it suitable for practical applications such as automatic dubbing.
Video Editing
54.9K

Manus
Manus, developed by Monica.im, is the world's first truly autonomous AI agent product, able to deliver complete task results directly rather than merely offering suggestions or answers. It uses a multi-agent architecture running in an independent virtual machine and completes tasks directly by writing and executing code, browsing the web, and operating applications. Manus achieved SOTA results on the GAIA benchmark, demonstrating strong task-execution ability. Its goal is to be the user's 'agent' in the digital world, helping users complete complex tasks efficiently.
Personal Assistant
1.5M

Trae (China Edition)
Trae is an AI-native IDE designed for Chinese development scenarios, integrating AI deeply into the development environment. Through intelligent code completion, context understanding, and other features, it significantly improves development efficiency and code quality. Trae fills the gap in domestic AI-integrated development tools and meets Chinese developers' need for efficient tooling. Positioned as a high-end development tool for professional developers, it has not yet published pricing, but it is expected to adopt a paid model in keeping with that positioning.
Development & Tools
145.5K
English Picks

Pika
Pika is a video creation platform where users upload their creative ideas and Pika automatically generates the corresponding videos. Key features: it turns a wide range of creative ideas into videos, produces professional-looking results, and is simple to use. The platform runs on a free-trial model and targets creators and video enthusiasts.
Video Generation
18.7M
Chinese Picks

LiblibAI
LiblibAI is a leading Chinese AI creation platform that provides powerful AI creation capabilities to help creators realize their ideas. The platform offers a vast collection of free AI creation models that users can search and use for image, text, and audio creation, and it also lets users train their own AI models. Aimed at the broad community of creators, LiblibAI is committed to making creation accessible, serving the creative industry, and letting everyone enjoy the fun of creating.
AI Model
8.0M