# GMNet
**Repository Path**: chunfengshi/GMNet
## Basic Information
- **Project Name**: GMNet
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-08-28
- **Last Updated**: 2025-08-28
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
[ICLR2025] Learning Gain Map for Inverse Tone Mapping
Yinuo Liao, Yuanshen Guan, Ruikang Xu, Jiacheng Li, Shida Sun, Zhiwei Xiong*
[Paper Link]
[Datasets]
[Codes]
[Scripts]
[Contact]
```tex
@inproceedings{Liao_2025_ICLR,
  title     = {Learning Gain Map for Inverse Tone Mapping},
  author    = {Yinuo Liao and Yuanshen Guan and Ruikang Xu and Jiacheng Li and Shida Sun and Zhiwei Xiong},
  booktitle = {The Thirteenth International Conference on Learning Representations},
  month     = {April},
  year      = {2025}
}
```
## 1. Datasets
We provide a **Synthetic Dataset** and a **Real-world Dataset**, each organized into the following four parts:
- `image`: The input SDR images
- `gainmap`: The ground-truth gain maps
- `metadata`: The metadata for restoring HDR from the SDR-GM pair (only Qmax here)
- `thumbnail`: The SDR images down-sampled to `256×256` (bicubic interpolation)
The dataset is structured as follows:
```
synthetic_dataset
├── train
| ├── image
| | └── *.png
| ├── gainmap
| | └── *.png
| ├── metadata
| | └── *.npy
| └── thumbnail
| └── *.png
└── test
├── image
| └── *.png
├── gainmap
| └── *.png
├── metadata
| └── *.npy
└── thumbnail
└── *.png
```
More details can be found in the paper and the table below:
| | Synthetic Dataset | Real-world Dataset |
| :---------: | :-------------------------: | :-------------------------: |
| Source | HDR video frames | captured photographs |
| Volume | 900 train & 100 test | 900 train & 100 test |
| SDR White Level | 100 nit | 203 nit |
| HDR Peak Level | 800 nit | 1015 nit |
| Qmax Range | [0, 3] ([0, log8]) | [0, 2.32] ([0, log5]) |
| Input SDR Image | 3840×2160 8-bit RGB | 4096×3072 8-bit RGB |
| Ground-truth Gain Map | 3840×2160 8-bit Gray | 2048×1536 8-bit Gray |
| Download Link | [[BaiduNetDisk]](https://pan.baidu.com/s/1I3gzQN-MqP62FMnlyhzdqA?pwd=1958) [[OneDrive]](https://stuxidianeducn-my.sharepoint.com/:f:/g/personal/ynliao_stu_xidian_edu_cn/Etsw2xb9LJBIgDsJ2lbh9CsBofhFihU0q7HbyM3Hm6P2_Q?e=hRccCg) | [[BaiduNetDisk]](https://pan.baidu.com/s/1I3gzQN-MqP62FMnlyhzdqA?pwd=1958) [[OneDrive]](https://stuxidianeducn-my.sharepoint.com/:f:/g/personal/ynliao_stu_xidian_edu_cn/Etsw2xb9LJBIgDsJ2lbh9CsBofhFihU0q7HbyM3Hm6P2_Q?e=hRccCg) |
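As a hypothetical illustration, one sample following the layout above could be loaded like this. The shared file stem `name` across the four folders and the Pillow/NumPy loading calls are assumptions for this sketch, not part of the release; check the filenames in your download:

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_sample(root, split, name):
    """Load one SDR / gain-map / metadata / thumbnail tuple.

    `root` points at e.g. `synthetic_dataset`, `split` is "train" or
    "test", and `name` is the file stem assumed to be shared by the
    four folders.
    """
    base = Path(root) / split
    sdr = np.asarray(Image.open(base / "image" / f"{name}.png"))        # HxWx3 uint8
    gainmap = np.asarray(Image.open(base / "gainmap" / f"{name}.png"))  # HxW uint8
    qmax = np.load(base / "metadata" / f"{name}.npy")                   # Qmax only
    thumb = np.asarray(Image.open(base / "thumbnail" / f"{name}.png"))  # 256x256x3
    return sdr, gainmap, qmax, thumb
```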
## 2. Codes
### 2.1 How to test
Please download our dataset first, then modify the `dataroot` in `./codes/options/config/test_syn.yml` to the path where you store the dataset; you can also modify `pretrain_model_G` to choose the pretrained model. When the configuration in `test_syn.yml` is ready, run the command:
```
cd codes
python test.py -opt options/config/test_syn.yml
```
The test results will be saved to `./results/test_name`.
### 2.2 How to train
To facilitate the training process, please modify the data path in `crop_training_patch.py` in [Scripts] and run it to crop the images and gain maps into patches:
```
cd scripts
python crop_training_patch.py --input_folder ../Synthetic_dataset/train/image --save_folder ../Synthetic_dataset/train/image_sp --n_thread 20 --crop_sz 480 --step 480
python crop_training_patch.py --input_folder ../Synthetic_dataset/train/gainmap --save_folder ../Synthetic_dataset/train/gainmap_sp --n_thread 20 --crop_sz 480 --step 480
```
This writes the patches of `image` to the `image_sp` folder and the patches of `gainmap` to the `gainmap_sp` folder. After that, please modify the `dataroot` in `./codes/options/config/train_syn.yml` to point at these sub-folders, then run:
```
cd codes
python train.py -opt options/config/train_syn.yml
```
The checkpoints and training states can be found in `./experiments/train_name`.
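The patch-cropping step above can be sketched as follows. This is a simplified, hypothetical stand-in for `crop_training_patch.py` (the real script also runs multithreaded via `--n_thread` and reads/writes image files); with `crop_sz` equal to `step`, the patches are non-overlapping:

```python
import numpy as np

def crop_patches(img, crop_sz=480, step=480):
    """Slide a crop_sz x crop_sz window with the given step over an
    HxW(xC) array and collect the resulting patches."""
    h, w = img.shape[:2]
    patches = []
    for top in range(0, h - crop_sz + 1, step):
        for left in range(0, w - crop_sz + 1, step):
            patches.append(img[top:top + crop_sz, left:left + crop_sz])
    return patches
```

With the default `--crop_sz 480 --step 480`, a `3840×2160` synthetic frame yields `8 × 4 = 32` patches (the 240 remaining rows are dropped by this sketch; the real script may handle borders differently).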
## 3. Scripts
We provide several practical scripts in `./scripts`; the details are as follows:
- `crop_training_patch.py`: This script crops the images to patches for training. (from [HDRTVNet](https://github.com/chxy95/HDRTVNet))
- `extract_double_layer_hdr.py`: This script extracts the SDR image, gain map and Qmax from a double-layer HDR file.
- `render_sdr_gm_to_linear_hdr.py`: This script restores linear HDR from the SDR image, gain map and Qmax.
- `pq_visualize.py`: This script converts the linear HDR to a PQ-OETF encoded HDR image for visualization.
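The last two scripts can be approximated as below. The recombination formula `HDR = SDR_linear * 2^(g * Qmax)` is an assumption based on the common gain-map convention (Qmax as log2 headroom, gain map normalized to [0, 1]) rather than taken from the repository, as is the gamma-2.2 SDR transfer; the PQ constants are the standard SMPTE ST 2084 values:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def sdr_gm_to_linear_hdr(sdr, gainmap, qmax, gamma=2.2):
    """Recombine an 8-bit SDR image and 8-bit gain map into linear HDR.

    Hypothetical sketch: assumes HDR = SDR_linear * 2^(g * qmax) with a
    simple gamma-2.2 SDR transfer; the repo's script may differ.
    """
    sdr_lin = (sdr.astype(np.float64) / 255.0) ** gamma
    g = gainmap.astype(np.float64) / 255.0
    if g.ndim == 2:
        g = g[..., None]  # broadcast the single-channel gain map over RGB
    return sdr_lin * np.exp2(g * qmax)

def pq_oetf(linear, peak_nits=10000.0):
    """PQ-encode linear light for visualization (1.0 maps to peak_nits)."""
    y = np.clip(linear * peak_nits / 10000.0, 0.0, 1.0)
    num = C1 + C2 * y ** M1
    den = 1.0 + C3 * y ** M1
    return (num / den) ** M2
```

Under this convention, the synthetic dataset's Qmax range [0, 3] corresponds to at most a 2^3 = 8x boost over SDR, consistent with the [0, log8] notation in the table above.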
## 4. Contact
If you have any questions, please open an issue or contact yinuoliao@mail.ustc.edu.cn
## 5. Acknowledgment
We appreciate the following GitHub repositories for their valuable work:
- BasicSR: https://github.com/xinntao/BasicSR
- HDRSample: https://github.com/JonaNorman/HDRSample
- HDR Toys: https://github.com/natural-harmonia-gropius/hdr-toys