This library allows reading and writing TFRecord files efficiently in Python, and provides an IterableDataset
interface for TFRecord files in PyTorch. Both uncompressed and gzip-compressed TFRecord files are supported.
This library is modified from tfrecord to remove its binding to tf.Example and to support generic TFRecord data.
```shell
pip install tfrecord-dataset
```
```python
import tfrecord_dataset as tfr

writer = tfr.TFRecordWriter('test.tfrecord')
writer.write(b'Hello world!')
writer.write(b'This is a test.')
writer.close()

for x in tfr.tfrecord_iterator('test.tfrecord'):
    print(bytes(x))
```
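For reference, each record in a TFRecord file is a simple length-prefixed block: an 8-byte little-endian payload length, a 4-byte checksum of the length, the payload itself, and a 4-byte checksum of the payload. The following is a minimal, stdlib-only sketch of that layout; note that real TFRecord files store masked CRC32C values in the two checksum fields, which this sketch writes as zeros, so files it produces would not pass CRC validation by other readers.

```python
import io
import struct

def write_record(f, payload: bytes):
    # TFRecord layout: uint64 length, uint32 CRC(length), payload, uint32 CRC(payload).
    # Real writers store masked CRC32C checksums; this sketch writes zeros for brevity.
    f.write(struct.pack('<Q', len(payload)))
    f.write(struct.pack('<I', 0))  # placeholder for the length checksum
    f.write(payload)
    f.write(struct.pack('<I', 0))  # placeholder for the payload checksum

def read_records(f):
    # Yield payloads one by one, skipping (not validating) the checksums.
    while True:
        header = f.read(12)  # 8-byte length + 4-byte length checksum
        if len(header) < 12:
            return
        (length,) = struct.unpack('<Q', header[:8])
        payload = f.read(length)
        f.read(4)  # skip the payload checksum
        yield payload

buf = io.BytesIO()
write_record(buf, b'Hello world!')
write_record(buf, b'This is a test.')
buf.seek(0)
print([bytes(r) for r in read_records(buf)])
```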
Use `TFRecordDataset` to read TFRecord files in PyTorch.
```python
import torch
from tfrecord_dataset.torch import TFRecordDataset

dataset = TFRecordDataset('test.tfrecord', transform=lambda x: len(x))
loader = torch.utils.data.DataLoader(dataset, batch_size=2)
data = next(iter(loader))
print(data)
```
The following `TFRecordDataset` reads TFRecord data from 8 files in parallel. The names of these 8 files match the pattern `data-0000?-of-00008`:

```python
dataset = TFRecordDataset('data@8', transform=lambda x: len(x))
```
The reader returns each TFRecord payload as raw bytes. You can pass a callable as the `transform` argument to parse the bytes into the desired format, as shown in the simple example above. Such a transformation can parse serialized structured data, e.g. protobuf messages, numpy arrays, or images.
Here is another example for reading and decoding images:
```python
import cv2

dataset = TFRecordDataset(
    'data.tfrecord',
    transform=lambda x: {'image': cv2.imdecode(x, cv2.IMREAD_COLOR)})
```
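A `transform` can also reconstruct numpy arrays. The sketch below assumes a hypothetical setup where each record was written as a raw float32 buffer (e.g. `writer.write(arr.tobytes())` for a 1-D float32 array); the dtype and shape are not stored in the payload, so the decoder must know them in advance.

```python
import numpy as np

def decode_floats(payload) -> np.ndarray:
    # Reinterpret the raw record bytes as a flat float32 vector.
    # Assumes the writer serialized arrays with arr.tobytes().
    return np.frombuffer(payload, dtype=np.float32)

# Usage sketch (assumes 'arrays.tfrecord' was written as described above):
# dataset = TFRecordDataset('arrays.tfrecord', transform=decode_floats)

print(decode_floats(np.array([1.0, 2.0, 3.0], dtype=np.float32).tobytes()))
```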
`TFRecordDataset` automatically shuffles the data with two mechanisms:

- It reads data into a buffer and randomly yields data from this buffer. Setting `buffer_size` to a larger value produces better randomness.
- For sharded TFRecords, it reads multiple files in parallel. Setting `file_parallelism` to a larger number also produces better randomness.
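The buffer-based mechanism is a standard streaming shuffle and can be sketched in a few lines of plain Python. This is a generic illustration of the technique, not the library's actual implementation: keep a fixed-size buffer, and once it is full, yield a uniformly random element and replace it with the next item from the stream.

```python
import random

def buffer_shuffle(iterable, buffer_size, rng=None):
    # Generic buffered-shuffle sketch: fill a fixed-size buffer, then
    # repeatedly yield a random element and refill from the stream.
    rng = rng or random.Random()
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) >= buffer_size:
            i = rng.randrange(len(buf))
            # Swap the chosen element to the end and pop it off.
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    # Drain whatever remains in the buffer in random order.
    rng.shuffle(buf)
    yield from buf
```

A larger `buffer_size` lets elements travel further from their original positions, which is why it improves randomness at the cost of memory.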
Index files are deprecated since v0.2.0 and are no longer required. Previously, such index files were generated with:

```shell
python -m tfrecord_dataset.tools.tfrecord2idx <tfrecord path> <index path>
```
By default, `TFRecordDataset` is infinite, meaning that it samples the data forever. You can make it finite by setting `num_epochs`:

```python
dataset = TFRecordDataset(..., num_epochs=2)
```
This repo is forked from https://github.com/vahidk/tfrecord.