Teachable Machine
Overview:
Teachable Machine is a web-based tool for creating machine learning models quickly, without specialized knowledge or coding skills. Users collect and organize sample data, Teachable Machine trains the model automatically, and after checking the model's accuracy users can export it for use elsewhere.
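As a rough illustration of the "export it for use" step, the sketch below loads an image model exported from Teachable Machine in the browser using the official @teachablemachine/image wrapper. The model URL is a placeholder, and the element passed to classify() is an assumption; substitute your own exported model and input element.

```typescript
// Minimal sketch, assuming a Teachable Machine image model uploaded via
// "Export model" -> "Upload (shareable link)" and the @teachablemachine/image package.
import * as tmImage from '@teachablemachine/image';

// Placeholder URL: replace <your-model-id> with the id of your exported model.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/<your-model-id>/';

async function classify(image: HTMLImageElement): Promise<void> {
  // load() fetches the model topology (model.json) and the class labels (metadata.json).
  const model = await tmImage.load(MODEL_URL + 'model.json', MODEL_URL + 'metadata.json');

  // predict() returns one {className, probability} entry per trained class.
  const predictions = await model.predict(image);
  for (const p of predictions) {
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`);
  }
}

// Example usage in a page that contains <img id="sample" ...>:
// classify(document.querySelector('#sample') as HTMLImageElement);
```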
Target Users:
Website Interaction, Mobile App, Internet of Things
Total Visits: 378.5K
Top Region: US (16.19%)
Website Views: 145.5K
Use Cases
Identify fruit ripeness
Recognize gestures
Control Arduino
Features
Supports three input types: images, audio, and poses
Based on TensorFlow.js, so trained models run in any environment that supports JavaScript (see the sketch after this list)
Offers a variety of tutorial examples
Supports export to TensorFlow.js, TensorFlow, and TensorFlow Lite formats, including a build for Arduino and similar microcontrollers
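To show that the TensorFlow.js export is not tied to the browser, the following sketch loads the exported model in Node.js. The ./model/ directory, the input size of 224x224, and the [0, 1] pixel normalization are assumptions about a typical Teachable Machine image export; adjust them to match your own model.

```typescript
// Minimal sketch, assuming the TensorFlow.js export is unzipped into ./model/
// and @tensorflow/tfjs-node is installed.
import * as tf from '@tensorflow/tfjs-node';
import { readFileSync } from 'fs';

async function predictFile(imagePath: string): Promise<void> {
  // The export contains model.json plus binary weight shards.
  const model = await tf.loadLayersModel('file://./model/model.json');

  // Decode the image, resize it to the assumed 224x224 input, scale pixels
  // to [0, 1], and add a batch dimension.
  const input = tf.tidy(() =>
    tf.image
      .resizeBilinear(tf.node.decodeImage(readFileSync(imagePath), 3), [224, 224])
      .div(255)
      .expandDims(0)
  );

  // One score per trained class, in the order listed in metadata.json.
  const scores = (model.predict(input) as tf.Tensor).dataSync();
  console.log('class scores:', Array.from(scores));
  input.dispose();
}

predictFile('./fruit.jpg');
```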