Color Diffusion
Overview
Color-diffusion is an open-source image colorization project based on diffusion models. It works in the LAB color space, using the existing grayscale information (the L, or lightness, channel) as conditioning to predict the missing color information (the A and B channels). This approach is useful in image processing, particularly for restoring old photographs and for artistic creation. The author built the project quickly, primarily to gain hands-on experience training a diffusion model from scratch; it is free to use and still has considerable room for improvement.
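The LAB decomposition at the heart of the project can be illustrated with a small sketch. The function below is a minimal NumPy reimplementation of the standard sRGB → XYZ (D65) → Lab conversion (a real pipeline would more likely call `skimage.color.rgb2lab`); it is not the project's own code, just a demonstration of how an image splits into the known L channel and the A/B channels the model must predict.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIELAB."""
    # sRGB gamma expansion to linear RGB
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ under the D65 illuminant
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 white point
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # Nonlinear compression used by CIELAB
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    L = 116.0 * fy - 16.0   # lightness, roughly [0, 100]
    a = 500.0 * (fx - fy)   # green-red axis
    b = 200.0 * (fy - fz)   # blue-yellow axis
    return np.stack([L, a, b], axis=-1)

# The grayscale input corresponds to the L channel; the model's task
# is to predict the missing A and B channels.
img = np.random.rand(4, 4, 3)
lab = rgb_to_lab(img)
L_channel, ab_channels = lab[..., :1], lab[..., 1:]
```

A quick sanity check: pure white (1, 1, 1) maps to L ≈ 100 with A and B near zero, i.e. full lightness and no chroma.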
Target Users
The target audience includes researchers and developers in the field of image processing, as well as artists and photographers interested in colorizing black and white photographs. Color-diffusion is suitable for them as it provides an open-source tool to experiment with and apply the latest image coloring techniques, facilitating innovation in areas like image restoration and artistic creation.
Total Visits: 474.6M
Top Region: US (19.34%)
Website Views: 52.2K
Use Cases
Restoring old photographs: Colorizing aged black and white photographs with Color-diffusion to give them plausible, natural-looking colors.
Artistic creation: Artists can use Color-diffusion to add color to their black and white works, creating new artistic effects.
Educational use: In image processing and computer vision courses, Color-diffusion can serve as a teaching tool to help students understand image coloring techniques.
Features
Colorizing images using the LAB color space
Adding noise only to the color channels (A and B) during training, leaving the lightness channel (L) untouched
Using UNet architecture for noise prediction
Conditioning the denoising UNet on features extracted from the grayscale image during training
Supporting command-line tools and a simple Gradio web UI for image coloring
Providing a non-Markovian forward diffusion process for image coloring
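The key trick in the feature list above — noising only the color channels — can be sketched with the standard DDPM forward process q(x_t | x_0). The linear beta schedule and variable names below are assumptions for illustration, not the project's exact code; the point is that the L channel passes through unchanged so the network always sees clean grayscale structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (an assumption; the project may use a different schedule)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def noise_color_channels(lab, t):
    """DDPM forward step applied only to the A/B channels.

    lab: array of shape (H, W, 3) with channels (L, A, B).
    Returns the partially noised image and the sampled noise,
    which is the denoising UNet's regression target.
    """
    ab = lab[..., 1:]
    eps = rng.standard_normal(ab.shape)
    noisy_ab = np.sqrt(alpha_bars[t]) * ab + np.sqrt(1.0 - alpha_bars[t]) * eps
    out = lab.copy()
    out[..., 1:] = noisy_ab  # L channel (index 0) is deliberately left intact
    return out, eps

lab = rng.standard_normal((8, 8, 3))
noisy, eps = noise_color_channels(lab, t=500)
```

During training, the UNet receives `noisy` (plus the timestep and grayscale features) and is optimized to recover `eps`.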
How to Use
1. Run `bash download_dataset.sh` to download and extract the CelebA dataset.
2. Use `inference.py` for command-line coloring: `python inference.py --image-path <IMG_PATH> --checkpoint <CKPT_PATH> --output <OUTPUT_PATH>`.
3. Alternatively, run `python app.py` to launch a simple Gradio web UI for image coloring.
4. In the web UI, upload a black and white image, select the model checkpoint, and click the colorization button.
5. Wait for the model to process the image, then download or view the colorized result.
6. You can adjust the model parameters for better coloring effects.
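Conceptually, what happens between uploading an image and receiving the result in the steps above is reverse diffusion over the color channels only. The sketch below is a generic DDPM ancestral sampling loop, not the project's actual `inference.py`; `dummy_unet` is a hypothetical stand-in for the trained checkpoint, and the schedule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def dummy_unet(l_channel, noisy_ab, t):
    """Stand-in for the trained UNet: predicts the noise in the A/B
    channels, conditioned on the grayscale L channel and timestep t."""
    return np.zeros_like(noisy_ab)  # a real model returns its noise estimate

def colorize(l_channel, predict_noise=dummy_unet):
    """DDPM ancestral sampling over the A/B channels; L stays fixed."""
    ab = rng.standard_normal(l_channel.shape + (2,))  # start from pure noise
    for t in reversed(range(T)):
        eps = predict_noise(l_channel, ab, t)
        # Posterior mean of x_{t-1} given the predicted noise
        ab = (ab - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise except at the final step
            ab += np.sqrt(betas[t]) * rng.standard_normal(ab.shape)
    # Reattach the untouched L channel to form the full LAB result
    return np.concatenate([l_channel[..., None], ab], axis=-1)

gray = rng.random((16, 16))  # the uploaded black and white image (L channel)
lab_out = colorize(gray)
```

The final LAB array would then be converted back to RGB for display or download.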