

Color Diffusion
Overview:
Color-diffusion is an image colorization project based on diffusion models that uses the LAB color space to colorize black-and-white images. Its main advantage is that it uses the existing grayscale information (the L channel) to predict the color information (the A and B channels) through model training. This technique is significant in image processing, especially for restoring old photographs and for artistic creation. The author built Color-diffusion quickly as an open-source project to satisfy curiosity and gain experience training a diffusion model from scratch; it is currently free and still has considerable room for improvement.
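The L/AB split described above can be illustrated with a minimal sRGB-to-CIELAB conversion in NumPy. This is a standalone sketch using the standard D65 formulas, not code from the repository:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image in [0, 1], shape (H, W, 3), to CIELAB (D65)."""
    # sRGB -> linear RGB (undo gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (D65 primaries)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    # normalize by the D65 white point
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # nonlinear compression used by CIELAB
    thresh = (6 / 29) ** 3
    f = np.where(xyz > thresh, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16            # lightness: the "grayscale" channel
    a = 500 * (f[..., 0] - f[..., 1])   # green-red color axis
    b = 200 * (f[..., 1] - f[..., 2])   # blue-yellow color axis
    return np.stack([L, a, b], axis=-1)
```

For a grayscale input (R = G = B) the A and B channels come out near zero, so all of the image's information sits in L; colorization then amounts to predicting plausible A/B values for a known L.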
Target Users:
The target audience includes researchers and developers in image processing, as well as artists and photographers interested in colorizing black-and-white photographs. Color-diffusion suits them because it provides an open-source tool for experimenting with and applying recent image colorization techniques, supporting innovation in areas like image restoration and artistic creation.
Use Cases
Restoring old photographs: Colorizing aged black-and-white photographs with Color-diffusion to restore them with plausible, natural-looking color.
Artistic creation: Artists can use Color-diffusion to add color to their black and white works, creating new artistic effects.
Educational use: In image processing and computer vision courses, Color-diffusion can serve as a teaching tool to help students understand image coloring techniques.
Features
Colorizing images using the LAB color space
Adding noise only to the color (A/B) channels during training while leaving the lightness (L) channel untouched
Using UNet architecture for noise prediction
Combining features of grayscale images with those of the denoising UNet during training
Supporting command-line tools and a simple Gradio web UI for image coloring
Providing a non-Markovian forward diffusion process for image coloring
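The channel-selective noising listed above can be sketched with a standard DDPM linear schedule. This is a minimal NumPy illustration; the function and parameter names are ours, not the project's actual API:

```python
import numpy as np

def noise_color_channels(lab, t, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Forward-diffuse only the A/B channels of an (H, W, 3) LAB image.

    Standard DDPM closed form: x_t = sqrt(abar)*x_0 + sqrt(1 - abar)*eps,
    applied to channels 1-2 only; channel 0 (lightness) is left untouched.
    """
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(lab[..., 1:].shape)
    noisy = lab.copy()
    noisy[..., 1:] = (np.sqrt(alpha_bar) * lab[..., 1:]
                      + np.sqrt(1.0 - alpha_bar) * eps)
    return noisy, eps
```

During training, the denoising UNet sees the noisy A/B channels alongside the clean L channel and is trained to regress `eps`, which is how the grayscale information conditions the color prediction.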
How to Use
1. Run `bash download_dataset.sh` to download and extract the CelebA dataset.
2. Use `inference.py` for command-line coloring: `python inference.py --image-path <IMG_PATH> --checkpoint <CKPT_PATH> --output <OUTPUT_PATH>`.
3. Alternatively, run `python app.py` to launch a simple Gradio web UI for image coloring.
4. In the web UI, upload a black and white image, select the model checkpoint, and click the colorization button.
5. Wait for the model to process the image, then download or view the colorized result.
6. You can adjust the model parameters for better coloring effects.