
Faster_RCNN

public · 3 stars · 2 forks · 1 issue

Commits

List of commits on branch master:

• 8203fdef2f4c30f0402ac9280f8d608f48655fe3: update (committed 5 years ago)
• 90053b3034f245b13555f6706293df91aeb03ba8: update license information (ssymphonylyh committed 5 years ago)
• 32586656e413dc456a01a9f5bc1a7fd063ddff0b: update train (committed 5 years ago)
• 7b7ef714aa6cf182bfca3bdd2c5530d7cedd3719: gitignore (committed 5 years ago)
• 1315fa7b6c3eb74b6be4675064b923d99b9fcdce: structure (committed 5 years ago)
• 2b458a8a60a3b42b1c3c01a3a2746cd8d4f5fcb0: resnet embedded (committed 5 years ago)

README


Faster RCNN

This project implements Faster RCNN as proposed in Ren et al. (2015). The code is written entirely in PyTorch, and every line is written from scratch (except for PyTorch's built-in ResNet). The programming style differs from the authors' implementation and other existing implementations, with an emphasis on efficiency and readability.

Installation Guide

Clone the repository:

git clone https://github.com/symphonylyh/Faster_RCNN.git

The pre-trained ResNet101 model and the PASCAL VOC 2007 dataset will be downloaded automatically.
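
For reference, this kind of automatic download is usually backed by torchvision's built-in model and dataset loaders. The snippet below is only a sketch of that mechanism, not the repository's actual code; the ./data path and the use of VOCDetection are assumptions.

    # Sketch of how the automatic downloads are typically wired up with torchvision.
    # The exact mechanism in this repository may differ.
    import torchvision

    # Downloads ImageNet-pretrained ResNet-101 weights to the local torch cache on first use.
    backbone = torchvision.models.resnet101(pretrained=True)

    # Downloads and extracts PASCAL VOC 2007 (trainval split) into ./data on first use.
    voc2007 = torchvision.datasets.VOCDetection(
        root="./data", year="2007", image_set="trainval", download=True
    )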

Train the network, or resume training from the last saved model:

python train.py
python train.py --resume
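
The internals of the --resume flag are not shown in the README; the sketch below illustrates one common way such a flag is wired up. The checkpoint file name, dictionary keys, and placeholder model are hypothetical rather than taken from train.py.

    # A common pattern for a --resume flag; the checkpoint path, dictionary keys,
    # and placeholder model below are illustrative, not necessarily what train.py does.
    import argparse
    import os
    import torch

    parser = argparse.ArgumentParser()
    parser.add_argument("--resume", action="store_true", help="resume from the last saved model")
    args = parser.parse_args()

    # Placeholder network/optimizer so the sketch is self-contained; the real
    # train.py builds the Faster RCNN model here instead.
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    os.makedirs("logs", exist_ok=True)
    ckpt_path = "logs/checkpoint.pth"   # hypothetical file name

    start_epoch = 0
    if args.resume and os.path.exists(ckpt_path):
        ckpt = torch.load(ckpt_path)
        model.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        start_epoch = ckpt["epoch"] + 1

    for epoch in range(start_epoch, 5):
        # ... run one training epoch here ...
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "epoch": epoch}, ckpt_path)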

Model checkpoints and training statistics will be saved in /logs. TensorBoard can be used to visualize the training process:

tensorboard --logdir=logs
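
The statistics that TensorBoard picks up under logs are typically written with torch.utils.tensorboard; here is a minimal sketch of that pattern (the logged tag name is illustrative, not necessarily what this project records).

    # Minimal example of writing scalars that `tensorboard --logdir=logs` can display.
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="logs")
    for step in range(100):
        loss = 1.0 / (step + 1)  # placeholder value; real code would log the actual training loss
        writer.add_scalar("train/loss", loss, step)
    writer.close()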

PyTorch Programming Notes

This is my first experience implementing a non-trivial model architecture, so here are some notes and good practices for PyTorch programming:

  • Within layers, we may need to declare new variables. I was using something like newTensor = torch.zeros(N, M). This works for CPU execution, but when running on a GPU it complains that newTensor is on the CPU device and is therefore incompatible with the other GPU tensors. To fix this, use newTensor = torch.zeros(N, M).to(oldTensor.device), where oldTensor is a tensor already on the target device. The new tensor then simply inherits the device information, instead of relying on a global variable that tracks the correct device. A minimal illustration is sketched below.
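
A minimal sketch of the point above; the helper function name and tensor shapes are made up for the example.

    import torch

    def forward_helper(oldTensor):  # hypothetical helper inside a layer's forward()
        # CPU-only version: the new tensor is always created on the CPU and will
        # clash with GPU inputs.
        # newTensor = torch.zeros(4, 4)

        # Device-agnostic version: newTensor inherits oldTensor's device.
        newTensor = torch.zeros(4, 4).to(oldTensor.device)
        return oldTensor + newTensor

    x = torch.rand(4, 4)  # still works unchanged if x = x.to("cuda")
    print(forward_helper(x))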