
Denoising of path-traced images using Deep Learning

Project for the Computer Vision course, Sapienza University.

The aim of this project is to build a PyTorch model able to denoise images. We're particularly interested in the noise produced by path tracing (there's a cool video from Disney explaining this process, if you've never come across those words!).

An important part of this project was investigating the rendering noise: is there a way to recreate this noise algorithmically? Great question. We made use of simple Gaussian noise along with a revisited version of salt-and-pepper noise to solve this task.
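As a rough sketch of the idea above, synthetic "rendering" noise can be built by adding Gaussian noise and then speckling a fraction of pixels with pure black or white. The function name, parameters, and defaults here are illustrative, not taken from the repository:

```python
import numpy as np

def add_render_noise(img, sigma=0.1, sp_amount=0.02, rng=None):
    """Corrupt a float grayscale image in [0, 1] with additive Gaussian
    noise plus a salt-and-pepper-style speckle. Hypothetical helper
    sketching the noise model described above."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + rng.normal(0.0, sigma, img.shape)     # additive Gaussian noise
    speckle = rng.random(img.shape) < sp_amount         # pixels to corrupt
    noisy[speckle] = rng.integers(0, 2, speckle.sum())  # salt (1) or pepper (0)
    return np.clip(noisy, 0.0, 1.0)                     # stay in valid range
```

Seeding the generator (`rng=np.random.default_rng(42)`) makes the corruption reproducible across training runs.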

Usage

Dependencies

The repository includes requirements.txt, a file containing the list of packages to be installed using conda, like so:

conda install --file requirements.txt

Once the requirements are installed, you shouldn't have any problems executing the scripts. Consider also creating a new environment, so that you don't have to worry about which packages are really needed and which are not after you're done with this project. With conda, that's easily done with the following command:

conda create --name <env> --file requirements.txt

where you replace <env> with the name you want to give to the new environment.

Data structure 🗄️

To train the model from scratch, you must have a data directory in which the files are organized as follows:

├── train
│   ├── 1.jpg
│   ├── ...
│   ├── 2.jpg
│   └── 3.jpg
└── test
    ├── 7.jpg
    ├── ...
    ├── 8.jpg
    └── 9.jpg

Training 🏋️

Once you have the files organized, you can start training directly from the command line. For example, to use a batch size of 8 with the training and test sets in the directory data, run:

$ python main.py --batch_size 8 --data_path data

Apart from this super simple example, there are quite a few parameters that can be set: it is possible to resume the last checkpoint, use Google Drive as storage (e.g., if training on Colab), and more.

$ python main.py --h
usage: main.py [-h] [--model_checkpoint MODEL_CHECKPOINT] [--resume_last]
               [--batch_size BATCH_SIZE] [--epochs EPOCHS]
               [--learning_rate LEARNING_RATE] [--data_path DATA_PATH]
               [--use_drive]

Arguments parser

optional arguments:
  -h, --help            show this help message and exit
  --model_checkpoint MODEL_CHECKPOINT
                        path to .pth file checkpoint of the model (default:
                        none)
  --resume_last         use this flag to resume the last checkpoint of the
                        model
  --batch_size BATCH_SIZE
                        batch size (default: 8)
  --epochs EPOCHS       number of epochs (default: 500)
  --learning_rate LEARNING_RATE
                        learning rate (default 0.1)
  --data_path DATA_PATH
                        dataset path
  --use_drive           use this flag to save checkpoint on drive
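For reference, the interface above could be reconstructed with argparse roughly like this. This is a sketch based solely on the help text, not the repository's actual main.py:

```python
import argparse

def build_parser():
    """Rebuild a parser matching the help output above; defaults mirror
    the documented ones."""
    p = argparse.ArgumentParser(description="Arguments parser")
    p.add_argument("--model_checkpoint", default=None,
                   help="path to .pth file checkpoint of the model")
    p.add_argument("--resume_last", action="store_true",
                   help="resume the last checkpoint of the model")
    p.add_argument("--batch_size", type=int, default=8, help="batch size")
    p.add_argument("--epochs", type=int, default=500, help="number of epochs")
    p.add_argument("--learning_rate", type=float, default=0.1,
                   help="learning rate")
    p.add_argument("--data_path", default=None, help="dataset path")
    p.add_argument("--use_drive", action="store_true",
                   help="save checkpoints on drive")
    return p
```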

What about GANs? 🔫

This repository is closely related to its GAN twin, which contains some (very similar) code to train the model using a Generative Adversarial Network. Go check it out if you're interested!