VideoColorGrading
Video Color Grading via Look-Up Table Generation
Seunghyun Shin1, Dongmin Shin2, Jisu Shin1, Hae-Gon Jeon2†, Joon-Young Lee3†
1GIST 2Yonsei University 3Adobe Research
ICCV 2025
📝 Introduction
We present a reference-based video color grading framework. Our key idea is to generate a look-up table (LUT) for color attribute alignment between reference scenes and input video via a diffusion model.
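For readers new to LUT-based grading, the sketch below illustrates what a 3D look-up table does: it re-maps every input RGB color by indexing a small 3D table of output colors. This is a generic illustration (standard .cube parsing plus a nearest-neighbor lookup), not the code used in this repo, and the file name is a placeholder.

```python
# Illustrative only: how a 3D LUT re-maps colors. The .cube parsing and the
# nearest-neighbor lookup below are generic, not this repo's internal API.
import numpy as np

def load_cube_lut(path):
    """Parse a .cube file into an (N, N, N, 3) float array (indexed [b, g, r])."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.upper().startswith("LUT_3D_SIZE"):
                size = int(line.split()[-1])
            elif line[0].isdigit():
                rows.append([float(v) for v in line.split()[:3]])
    # .cube files list entries with the red index varying fastest.
    return np.array(rows, dtype=np.float32).reshape(size, size, size, 3)

def apply_lut(image, lut):
    """Apply a 3D LUT to an RGB image with values in [0, 1]."""
    n = lut.shape[0]
    idx = np.clip(np.rint(image * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # lut is indexed [b, g, r]

frame = np.random.rand(64, 64, 3).astype(np.float32)     # stand-in for a video frame
graded = apply_lut(frame, load_cube_lut("example.cube"))  # "example.cube" is a placeholder
```

Since a LUT is just a per-color mapping, applying the same generated LUT to every frame keeps the grading consistent across the video.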
If you find Video Color Grading useful, please give this repo a ⭐; your support means a lot to open-source projects. Thanks!
- [26/06/2025] 🎉🎉🎉 Video Color Grading has been accepted to ICCV 2025.
🚀 Quick Start
Installation
- Clone the repository:
git clone https://github.com/seunghyuns98/VideoColorGrading.git
- Install dependencies:
- Create the conda environment directly with our bash script:
source fast_env.sh
- Or install manually (please refer to fast_env.sh for the required packages).
- Download the pretrained model weights from Google Drive and place them in the `pretrained/` directory.
Inference
Run the inference code using the provided example reference image and input video.
If you placed the pretrained models in a directory other than `pretrained/`, make sure to update the path in `configs/prompts/video_demo.yaml`.
python video_demo.py \
--ref_path examples/reference1.jpg \
--input_path examples/video1.mp4 \
--save_path output/example1.mp4
🏋️♂️ Train Your Own Model
Training consists of two steps:
- Training GS-Extractor
- Training L-Diffuser
Before training, make sure to update the config files to match your environment:
video_folder: <PATH_TO_YOUR_VIDEO_DATASET>
lut_folder: <PATH_TO_YOUR_LUT_DATASET>
step1_checkpoint_path: <PATH_TO_YOUR_PRETRAINED_STEP1_MODEL>
etc.
You can find the config files in the `configs/` folder.
Also, update the LUT path in your dataloader.
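As a reference for the expected format, a training config might look roughly like the YAML below; the exact key names and defaults should be taken from the real files under `configs/training/`, and the paths and checkpoint file name here are placeholders.

```yaml
# Illustrative excerpt only; follow the actual files under configs/training/.
video_folder: /data/condensed_movies/frames     # your video dataset
lut_folder: /data/luts                          # your LUT dataset
step1_checkpoint_path: pretrained/step1.ckpt    # your pretrained Step 1 (GS-Extractor) model
```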
📁 Dataset Preparation
We use the Condensed Movies Dataset, which consists of over 33,000 clips from 3,000 movies. The clips cover the salient parts of each film and run about two minutes on average.
We also use 100 LUT bases, selected as distinctive LUTs from the 400 LUTs in the Video Harmonization Dataset.
You can download them through: Google Drive
They are originally from the Condensed Movies Dataset and the Video Harmonization Dataset.
We recommend dividing the videos into frames using the script below.
Please make sure to change the video_path variable to the location where your dataset is stored.
bash video2frame.sh
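For reference, the frame extraction roughly amounts to the Python below (using OpenCV); the actual video2frame.sh script may use different tools, naming, or output layout, so treat this only as a sketch.

```python
# Rough equivalent of splitting a clip into frames with OpenCV.
# The directory layout and file naming are assumptions, not the repo's convention.
import os
import cv2

def video_to_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.png"), frame)
        idx += 1
    cap.release()

video_to_frames("examples/video1.mp4", "frames/video1")  # output path is a placeholder
```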
🔧 Training Phase 1
torchrun --nnodes=1 --nproc_per_node=8 train_step1.py --config configs/training/train_stage_1.yaml
🔧 Training Phase 2
torchrun --nnodes=1 --nproc_per_node=8 train_step2.py --config configs/training/train_stage_2.yaml
📊 Evaluation
You can evaluate performance by running:
python eval.py \
--video_path <PATH_TO_YOUR_VIDEO_DATASET> \
--lut_path <PATH_TO_YOUR_LUTs> \
--save_path <PATH_TO_SAVE_RESULTS>
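Independently of eval.py, you can sanity-check a graded result against a ground-truth frame with a simple full-reference metric such as PSNR; the snippet below is a generic example and not necessarily the metric set reported by eval.py.

```python
# Generic PSNR between two uint8 frames of the same size; a quick sanity check,
# not necessarily what eval.py reports.
import numpy as np

def psnr(a, b, max_val=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```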
🤝 Contributing
- Issues and pull requests are welcome.
- Contributions that optimize inference speed and memory usage are especially welcome, e.g., through model quantization, distillation, or other acceleration techniques.
❤️ Acknowledgement
We have used code from other great research works, including Animate-Anyone and GeometryCrafter. We sincerely thank the authors for their awesome work!
📜 Citation
If you find this work helpful, please consider citing:
@article{shin2025video,
title={Video Color Grading via Look-Up Table Generation},
author={Shin, Seunghyun and Shin, Dongmin and Shin, Jisu and Jeon, Hae-Gon and Lee, Joon-Young},
journal={arXiv preprint arXiv:2508.00548},
year={2025}
}