# Edge Computing

## RF-DETR
RF-DETR is a transformer-based real-time object detection model designed for high accuracy and low latency on edge devices. It surpasses 60 AP on the Microsoft COCO benchmark while maintaining fast inference speed, making it suitable for a wide range of real-world applications. RF-DETR targets practical object detection problems in industries that require efficient, accurate detection, such as security, autonomous driving, and intelligent monitoring.
Target Detection
64.6K
## OmniAudio-2.6B
OmniAudio-2.6B is a multimodal model with 2.6 billion parameters that seamlessly processes both text and audio inputs. The model combines Gemma-2B, Whisper Turbo, and a custom projection module. Unlike the traditional approach of chaining separate ASR and LLM models, it unifies both capabilities in one efficient architecture, achieving minimal latency and resource overhead. This enables it to process audio and text securely and rapidly, directly on edge devices such as smartphones, laptops, and robots.
Speech Recognition
55.8K
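The projection module mentioned above is the piece that bridges the audio encoder and the language model. OmniAudio's actual architecture is not public in detail, so the following is only a minimal sketch of the general idea — a learned linear map from audio-encoder embeddings into the LLM's embedding space — with invented, toy dimensions and weights:

```python
# Hypothetical sketch of an audio-to-text projection module, the kind of
# component that bridges an audio encoder (e.g. Whisper Turbo) and an LLM
# (e.g. Gemma-2B). Dimensions and weights are illustrative, not OmniAudio's.

AUDIO_DIM = 4   # audio encoder embedding size (real encoders use ~1280)
TEXT_DIM = 6    # LLM embedding size (real LLMs use ~2048)

# A linear projection: one weight row per output dimension.
W = [[0.1 * (i + j) for j in range(AUDIO_DIM)] for i in range(TEXT_DIM)]

def project(audio_embedding):
    """Map one audio-encoder embedding into the LLM's embedding space."""
    return [sum(w * a for w, a in zip(row, audio_embedding)) for row in W]

audio_tokens = [[1.0, 0.0, 0.5, 0.2]]           # fake audio-encoder outputs
text_space = [project(tok) for tok in audio_tokens]
```

Once projected, the audio embeddings can be interleaved with ordinary text-token embeddings and fed to the LLM as a single sequence, which is what lets one model handle both modalities without a separate ASR stage.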
## SmolVLM
SmolVLM is a compact yet powerful visual language model (VLM) with 2 billion parameters, leading in efficiency and memory usage among similar models. It is fully open-source, with all model checkpoints, VLM datasets, training recipes, and tools released under the Apache 2.0 license. The model is designed for local deployment in browsers or edge devices, reducing inference costs and allowing for user customization.
AI Model
53.3K
## Workers AI
Workers AI is a product launched by Cloudflare for running machine learning models in edge computing environments. It allows users to deploy and execute AI applications across Cloudflare's global network nodes, capable of handling various tasks such as image classification, text generation, and object detection. The introduction of Workers AI signifies Cloudflare's deployment of GPU resources in its global network, enabling developers to build and deploy ambitious AI applications close to users. Key advantages of this product include global distributed deployment, low latency, high performance, and reliability, with both free and paid plans available.
Machine Learning
46.9K
## Moonshine
Moonshine is a suite of speech-to-text models optimized for resource-constrained devices, making it ideal for real-time, on-device applications such as live transcription and voice command recognition. It outperforms the OpenAI Whisper model of the same size in word error rate (WER) on test datasets used in the OpenASR leaderboard maintained by HuggingFace. Additionally, Moonshine's computational requirements vary with the length of the input audio, allowing for quicker processing of shorter audio compared to the Whisper model, which processes everything in 30-second chunks. Moonshine processes 10-second audio segments at a speed five times faster than Whisper while maintaining the same or better WER.
Speech Recognition
56.3K
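The scaling difference described above — a fixed 30-second window versus compute proportional to actual audio length — can be illustrated with a toy cost model. The numbers below are arbitrary units, not the models' real FLOP counts, and the padding effect shown here is only one contributor to Moonshine's reported speedup:

```python
# Illustrative comparison: a fixed-window model pads every input to 30 s,
# so its cost is constant, while a variable-length model's cost scales
# with the actual audio duration. Cost units are arbitrary.

COST_PER_SECOND = 1.0

def fixed_window_cost(duration_s, window_s=30.0):
    # Input is padded up to the window, so short clips pay the full price.
    return max(duration_s, window_s) * COST_PER_SECOND

def variable_length_cost(duration_s):
    return duration_s * COST_PER_SECOND

clip = 10.0  # a 10-second utterance
speedup = fixed_window_cost(clip) / variable_length_cost(clip)  # 3x from padding alone
```

For audio longer than the window, both approaches scale with duration; the advantage of variable-length processing is greatest on short clips, which dominate real-time use cases like voice commands.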
## Quantized Llama
Quantized Llama models are quantized versions of Meta's Llama large language models: quantization reduces model size and increases inference speed while preserving quality and safety. These models are especially suitable for mobile and edge deployments, enabling fast on-device inference on resource-constrained devices while minimizing memory usage. The Quantized Llama models mark an important advance in mobile AI, allowing more developers to build and deploy high-quality AI applications without extensive computational resources.
Model Training and Deployment
45.5K
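To make the size/speed trade-off concrete, here is a minimal sketch of symmetric int8 weight quantization — the general technique behind shrinking a model's weights, shown in its simplest form. Meta's actual quantization schemes for Llama are considerably more sophisticated; this only illustrates the core idea of storing a scale factor plus small integers:

```python
# Minimal sketch of symmetric int8 quantization: each float32 weight
# (4 bytes) becomes one int8 (1 byte) plus a shared per-group scale.

def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

The reconstruction error is bounded by half the scale per weight, which is why quantization preserves quality well when weight ranges are small — and why production schemes add per-channel or per-group scales instead of one global scale.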
## TEN-framework
TEN-framework is an innovative AI agent framework designed to provide high-performance support for real-time multimodal interactions. It supports multiple programming languages and platforms, integrates edge and cloud execution, and flexibly moves beyond the limitations of single-model designs. By managing agent state in real time, TEN-framework enables AI agents to respond dynamically and adjust their behavior instantly. The framework is built to meet the growing demand for complex AI applications, especially in audio-visual contexts. It not only offers efficient development support but also promotes innovation through modular, reusable extensions.
Development & Tools
56.9K
## Ministral-8B-Instruct-2410
Ministral-8B-Instruct-2410 is a large language model developed by the Mistral AI team for local intelligence, on-device computation, and edge use cases. It excels among models of similar size, supporting a 128k context window with an interleaved sliding-window attention mechanism. The model is trained on multilingual and code data, supports function calling, and has a vocabulary of 131k tokens. It demonstrates strong performance across benchmarks covering knowledge and common sense, code and mathematics, and multilingual tasks. Its performance in chat/arena scenarios (as judged by gpt-4o) is particularly impressive, making it adept at handling complex conversations and tasks.
AI Model
57.1K
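Sliding-window attention, mentioned in the entry above, is what keeps long-context inference affordable on-device: each token attends only to a fixed-size window of recent tokens instead of the full sequence, so attention cost grows with the window rather than with the 128k context. A toy sketch of the causal sliding-window mask (sizes here are illustrative, not the model's real window):

```python
# Sketch of a causal sliding-window attention mask: token i may attend
# to itself and the previous (window - 1) tokens only.

def sliding_window_mask(seq_len, window):
    """mask[i][j] is True when token i may attend to token j."""
    return [
        [max(0, i - window + 1) <= j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(seq_len=6, window=3)
# Token 4 sees tokens 2, 3, 4 — but not token 0.
```

Stacking many such layers lets information still propagate across the whole context (each layer extends the effective receptive field by one window), which is how window-limited attention remains compatible with very long inputs.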
## Llama 3.2
Llama 3.2 is a series of pre-trained and fine-tuned large language models (LLMs): multilingual text-only models at the 1B and 3B sizes, and models that accept text and image input and produce text output at the 11B and 90B sizes. These models are designed for developing high-performance, efficient applications. Llama 3.2 models can run on mobile and edge devices, support multiple programming languages, and can be used to build agent applications through the Llama Stack.
AI Model
53.3K
## Grounding DINO 1.5 API
Grounding DINO 1.5, developed by IDEA Research, is a series of advanced models designed to push the boundaries of open-world object detection technology. The series includes two models: Grounding DINO 1.5 Pro and Grounding DINO 1.5 Edge, optimized for diverse applications and edge computing scenarios, respectively.
AI image detection and recognition
81.1K
## VILA
VILA is a pre-trained visual language model (VLM) that achieves video and multi-image understanding through pre-training on large-scale interleaved image-text data. VILA can be deployed on edge devices using AWQ 4-bit quantization and the TinyChat framework. Key findings: 1) interleaved image-text data is crucial for performance; 2) not freezing the large language model (LLM) during interleaved image-text pre-training promotes in-context learning; 3) re-mixing text instruction data is critical for boosting both VLM and plain-text performance; 4) token compression allows more video frames to be processed. VILA demonstrates compelling capabilities, including video reasoning, in-context learning, visual chains of reasoning, and better world knowledge.
AI Model
84.7K
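The AWQ 4-bit quantization mentioned above halves weight storage even relative to int8 because two 4-bit values fit in each byte. The sketch below shows only that packing step, not AWQ's activation-aware scale selection, which is the part that preserves accuracy:

```python
# Illustrative 4-bit weight packing: two unsigned 4-bit values (0..15)
# are stored per byte, halving memory versus int8.

def pack_4bit(values):
    """Pack an even-length list of 4-bit values, two per byte."""
    assert all(0 <= v <= 15 for v in values) and len(values) % 2 == 0
    return bytes((hi << 4) | lo for hi, lo in zip(values[::2], values[1::2]))

def unpack_4bit(packed):
    out = []
    for byte in packed:
        out.extend([byte >> 4, byte & 0x0F])
    return out

vals = [3, 12, 0, 15]
packed = pack_4bit(vals)   # 2 bytes instead of 4
```

At inference time, frameworks like TinyChat unpack and dequantize these values on the fly inside fused kernels, so the memory savings carry through to bandwidth, which is usually the bottleneck on edge hardware.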
## Octopus-V2
Developed by Stanford University's NexaAI, Octopus-V2-2B is an open-source large language model with 2 billion parameters, specifically tailored for Android API function calls. It utilizes a unique functional tokenization strategy for both training and inference, achieving performance comparable to GPT-4 while improving inference speed. Octopus-V2-2B is particularly suited for edge computing devices, allowing for direct on-device execution and supporting a wide range of applications.
AI Model
188.5K
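The functional tokenization strategy described above assigns each API function its own dedicated token, so the model selects a function by emitting a single token rather than spelling out its name — which shortens and stabilizes function-call decoding. The token IDs and function names below are invented for illustration; Octopus-V2's real token vocabulary differs:

```python
# Hypothetical sketch of functional tokens: each API function gets one
# dedicated vocabulary token, decoded back to a call after generation.

FUNCTIONAL_TOKENS = {
    "<nexa_0>": "take_a_photo",
    "<nexa_1>": "get_weather",
    "<nexa_2>": "send_text_message",
}

def decode_call(generated):
    """Resolve a generated functional token plus arguments into a call string."""
    token, _, args = generated.partition("(")
    func = FUNCTIONAL_TOKENS[token]
    return f"{func}({args}" if args else f"{func}()"

call = decode_call('<nexa_1>("Boston")')  # -> 'get_weather("Boston")'
```

Because choosing among N functions becomes a single-token classification instead of a multi-token generation, this both speeds up inference and removes a whole class of decoding errors (misspelled or hallucinated function names).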
## Chooch AI Vision
Chooch AI Vision Platform uses AI algorithms to enable real-time analysis and recognition of images and videos. The platform helps enterprises rapidly detect and analyze thousands of visual objects, images, or actions, and take immediate action when something is recognized. With highly precise and efficient operation, it can improve business performance. Chooch AI Vision Platform offers various pre-trained AI models for quick deployment and supports both cloud and edge devices. Pricing is customized based on specific needs.
AI Model
52.4K
## Blaize
Blaize is an AI edge computing hardware and software platform designed to be more efficient, flexible, accurate, and cost-effective than conventional alternatives. It enables the deployment of AI at the edge without compromising performance, bringing significant value to market transformation and improving the quality of work and life.
Development & Tools
51.9K
# Featured AI Tools
## Flow AI
Flow is an AI-driven movie-making tool designed for creators, utilizing Google DeepMind's advanced models to allow users to easily create excellent movie clips, scenes, and stories. The tool provides a seamless creative experience, supporting user-defined assets or generating content within Flow. In terms of pricing, the Google AI Pro and Google AI Ultra plans offer different functionalities suitable for various user needs.
Video Production
43.1K
## NoCode
NoCode is a platform that requires no programming experience, allowing users to quickly generate applications by describing their ideas in natural language, aiming to lower development barriers so more people can realize their ideas. The platform provides real-time previews and one-click deployment features, making it very suitable for non-technical users to turn their ideas into reality.
Development Platform
46.1K
## ListenHub
ListenHub is a lightweight AI podcast generation tool that supports both Chinese and English. Based on cutting-edge AI technology, it can quickly generate podcast content of interest to users. Its main advantages include natural dialogue and ultra-realistic voice effects, allowing users to enjoy high-quality auditory experiences anytime and anywhere. ListenHub not only improves the speed of content generation but also offers compatibility with mobile devices, making it convenient for users to use in different settings. The product is positioned as an efficient information acquisition tool, suitable for the needs of a wide range of listeners.
AI
43.6K
## MiniMax Agent
MiniMax Agent is an intelligent AI companion built on the latest multimodal technology. Its MCP-based multi-agent collaboration enables AI teams to solve complex problems efficiently. It provides features such as instant answers, visual analysis, and voice interaction, which can increase productivity by 10 times.
Multimodal technology
45.3K
## Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's latest released AI image generation model, significantly improving generation speed and image quality. With a super-high compression ratio codec and new diffusion architecture, image generation speed can reach milliseconds, avoiding the waiting time of traditional generation. At the same time, the model improves the realism and detail representation of images through the combination of reinforcement learning algorithms and human aesthetic knowledge, suitable for professional users such as designers and creators.
Image Generation
44.2K
## OpenMemory MCP
OpenMemory is an open-source personal memory layer that provides private, portable memory management for large language models (LLMs). It ensures users have full control over their data, maintaining its security when building AI applications. This project supports Docker, Python, and Node.js, making it suitable for developers seeking personalized AI experiences. OpenMemory is particularly suited for users who wish to use AI without revealing personal information.
open source
43.9K
## FastVLM
FastVLM is an efficient visual encoding model designed specifically for visual language models. It uses the innovative FastViTHD hybrid visual encoder to reduce the time required for encoding high-resolution images and the number of output tokens, resulting in excellent performance in both speed and accuracy. FastVLM is primarily positioned to provide developers with powerful visual language processing capabilities, applicable to various scenarios, particularly performing excellently on mobile devices that require rapid response.
Image Processing
42.0K
## LiblibAI
LiblibAI is a leading Chinese AI creative platform offering powerful AI creative tools to help creators bring their imagination to life. The platform provides a vast library of free AI creative models, allowing users to search and utilize these models for image, text, and audio creations. Users can also train their own AI models on the platform. Focused on the diverse needs of creators, LiblibAI is committed to creating inclusive conditions and serving the creative industry, ensuring that everyone can enjoy the joy of creation.
AI Model
6.9M