tf-slim-mnist

Commits

List of commits on branch master.
b79088f73b673d9bc0e12b8d9494da0781669a64 (Verified): update on deprecation and new repo (mmnoukhov committed 7 years ago)
dfaf86dd531f738d59be36dbd6c53b48c87f177d (Verified): Fixed README link (mmnoukhov committed 7 years ago)
f6899e38092fc3d173ce6ba962be3864edaadb1d (Verified): Merge pull request #3 from charlesreid1/master (mmnoukhov committed 7 years ago)
229d387531a78ca8fc7872a09badf90baf99ce0c (Unverified): Updating links, fixing some grammar/typos. (ccharlesreid1 committed 7 years ago)
4171ce4233da25324835ad768fa9a22e934c5040 (Unverified): updated README for new code (committed 8 years ago)
a5f13545aeb21f0fdbd4fb0b28b66664905c06ce (Unverified): removed unnecessary files and args (committed 8 years ago)

README


DEPRECATED: it seems that a lot of tf.slim functionality has been moved into other parts of TensorFlow and the library isn't really being maintained anymore 😞 So I've created another repo that uses tf.Estimator as the high-level library/abstraction. It is an even smaller repo and I'm trying to make it as simple as possible to follow, so check it out! https://github.com/mnoukhov/tf-estimator-mnist

tf-slim-mnist

An MNIST tutorial with TensorFlow-Slim (tf.contrib.slim), a lightweight library on top of TensorFlow. You can read more about it here, and here is a good iPython notebook about it.

Setting up data

Run python datasets/download_and_convert_mnist.py to create train.tfrecords and test.tfrecords files containing the MNIST data. By default (unless you specify --directory) they will be put into /tmp/mnist.
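
If you want to sanity-check the conversion, a quick record count works. This snippet is not part of the repo; it assumes the default /tmp/mnist output directory and the TF 1.x tf.python_io API:

```python
# Hypothetical sanity check (not part of the repo): count the records that
# download_and_convert_mnist.py produced, using the TF 1.x tf.python_io API.
import tensorflow as tf

for name in ("train", "test"):
    path = "/tmp/mnist/%s.tfrecords" % name  # adjust if you passed --directory
    count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
    print("%s: %d records" % (path, count))
```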

Running

Run the training, the validation, and TensorBoard concurrently. The results of the training and validation should show up in TensorBoard.

Running the training

Run mnist_train.py, which will read train.tfrecords using an input queue and write its model checkpoints and summaries to the log directory (you can specify it with --log_dir).
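
For reference, a slim training loop roughly follows the pattern below. This is a hypothetical sketch, not the repo's mnist_train.py: random tensors stand in for the real input queue and the model is a toy, but the create_train_op / slim.learning.train pattern (TF 1.x) is the same.

```python
# A minimal, hypothetical sketch of a tf.contrib.slim training loop (TF 1.x),
# not the repo's mnist_train.py. Random tensors stand in for the real
# train.tfrecords input queue so the example is self-contained.
import tensorflow as tf
slim = tf.contrib.slim

images = tf.random_normal([32, 28, 28, 1])                       # fake MNIST batch
labels = tf.one_hot(tf.random_uniform([32], maxval=10, dtype=tf.int32), 10)

# A tiny LeNet-ish model built from slim layers.
net = slim.conv2d(images, 32, [5, 5], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.flatten(net)
logits = slim.fully_connected(net, 10, activation_fn=None, scope='logits')

loss = tf.losses.softmax_cross_entropy(labels, logits)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(loss, optimizer)

# Checkpoints and summaries are written to the log directory, which is what
# mnist_eval.py and TensorBoard read from.
slim.learning.train(train_op, logdir='log/train', number_of_steps=100)
```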

Running the validation

Run mnist_eval.py, which will read test.tfrecords using an input queue and also read the training checkpoints from log/train (by default). It will then load the model at the latest checkpoint, run it on the test examples, and output its summaries and log to its own folder, log/eval (you can specify it with --log_dir).
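
The evaluation side typically uses slim.evaluation.evaluation_loop, which polls the training checkpoint directory on a timer. The sketch below is a hypothetical stand-in for mnist_eval.py (TF 1.x), with random tensors in place of the real test queue; in the real script the model must match the one being trained.

```python
# Hypothetical evaluation sketch (TF 1.x, tf.contrib.slim), not the repo's
# mnist_eval.py. Random tensors stand in for the test.tfrecords input queue.
import tensorflow as tf
slim = tf.contrib.slim

images = tf.random_normal([32, 28, 28, 1])
labels = tf.random_uniform([32], maxval=10, dtype=tf.int32)

logits = slim.fully_connected(slim.flatten(images), 10, activation_fn=None,
                              scope='logits')
predictions = tf.argmax(logits, 1)

# Streaming accuracy: 'update_op' is run num_evals times per evaluation pass.
accuracy, update_op = tf.metrics.accuracy(labels, predictions)
tf.summary.scalar('eval/accuracy', accuracy)

# Repeatedly loads the newest checkpoint from log/train and writes summaries
# to log/eval, which is what TensorBoard plots as the validation curve.
slim.evaluation.evaluation_loop(
    master='',
    checkpoint_dir='log/train',
    logdir='log/eval',
    num_evals=10,
    eval_op=update_op,
    eval_interval_secs=60)
```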

Running TensorBoard

TensorBoard lets you keep track of your training in a nice, visual way. It will read the logs from the training and validation and should update on its own, though you may sometimes have to refresh the page manually.

Make sure both training and validation output their summaries under one log directory, preferably each in its own subfolder. Then run tensorboard --logdir=log (replace log with your own log folder if you changed it).

If each process has its own folder, then train and validation will each get their own colour and checkbox in TensorBoard.

Notes

Woah, data input seems pretty different from what it used to be

TensorFlow has really changed the way it handles data input (for the better!), and though the new way seems pretty complicated (with queue runners etc.), it isn't that bad and can potentially make everything much faster and better.
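
For context, the queue-based pattern looks roughly like this (TF 1.x). The feature keys below are an assumption based on the common MNIST tfrecords layout; check datasets/download_and_convert_mnist.py for the exact keys this repo writes.

```python
# Sketch of a queue-based TFRecord input pipeline (TF 1.x). Feature keys
# ('image_raw', 'label') are assumed; the repo's conversion script may differ.
import tensorflow as tf

filename_queue = tf.train.string_input_producer(['/tmp/mnist/train.tfrecords'])
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)

features = tf.parse_single_example(serialized, {
    'image_raw': tf.FixedLenFeature([], tf.string),
    'label': tf.FixedLenFeature([], tf.int64),
})
image = tf.decode_raw(features['image_raw'], tf.uint8)
image = tf.reshape(image, [28, 28, 1])
image = tf.cast(image, tf.float32) / 255.0
label = tf.cast(features['label'], tf.int32)

# Queue runners fill a shuffling queue in background threads, so training
# never waits on disk I/O.
images, labels = tf.train.shuffle_batch(
    [image, label], batch_size=32, capacity=1000, min_after_dequeue=500)
```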

I'm trying to keep up with all the changes but if something seems off to you, then please open an issue or create a pull request!

Where did you get all those files in /datasets?

I took those files from the tensorflow/models repo, in the TensorFlow-Slim folder here. I modified download_and_convert_mnist.py just a little so it can be run as a standalone program, and took only the files you need to run a LeNet architecture on the MNIST dataset.

How do I do more than MNIST?

Modify the model file with whatever model you want and change the data input (maybe take a look at the datasets already available in slim).
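
For example, a replacement convnet built from slim layers (a hypothetical sketch, not code from this repo) could be plugged in wherever the LeNet is constructed:

```python
# A hypothetical replacement model built from slim layers; plug it in wherever
# the repo builds its model. Shapes assume 28x28x1 inputs like MNIST.
import tensorflow as tf
slim = tf.contrib.slim

def my_convnet(images, num_classes=10, is_training=True, scope='my_convnet'):
    with tf.variable_scope(scope):
        # arg_scope sets shared defaults (activation, regularizer) for layers.
        with slim.arg_scope([slim.conv2d, slim.fully_connected],
                            activation_fn=tf.nn.relu,
                            weights_regularizer=slim.l2_regularizer(1e-4)):
            net = slim.repeat(images, 2, slim.conv2d, 32, [3, 3], scope='conv1')
            net = slim.max_pool2d(net, [2, 2], scope='pool1')
            net = slim.repeat(net, 2, slim.conv2d, 64, [3, 3], scope='conv2')
            net = slim.max_pool2d(net, [2, 2], scope='pool2')
            net = slim.flatten(net)
            net = slim.fully_connected(net, 256, scope='fc1')
            net = slim.dropout(net, 0.5, is_training=is_training, scope='dropout1')
            return slim.fully_connected(net, num_classes, activation_fn=None,
                                        scope='logits')
```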