
PyTorch Seq2Seq

The main difference from the original repo is that the notebooks have been updated to work with torchtext>=0.12.0, plus some additional helper functions.
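
A rough sketch of the torchtext>=0.12 preprocessing style these notebooks rely on (the tokenizer, example sentences, and special tokens below are illustrative assumptions, not the exact setup of any one tutorial):

```python
# Illustrative sketch only: torchtext>=0.12 dropped the legacy Field/BucketIterator
# API in favour of plain functions such as get_tokenizer and build_vocab_from_iterator.
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

# spaCy tokenizer for German source sentences (assumes de_core_news_sm is installed)
tokenizer = get_tokenizer("spacy", language="de_core_news_sm")

train_sentences = [
    "zwei junge männer spielen fußball .",
    "ein mann fährt ein rotes auto .",
]

def yield_tokens(sentences):
    for sentence in sentences:
        yield tokenizer(sentence)

# Build the vocabulary from an iterator of token lists (replaces Field.build_vocab)
vocab = build_vocab_from_iterator(
    yield_tokens(train_sentences),
    specials=["<unk>", "<pad>", "<sos>", "<eos>"],
)
vocab.set_default_index(vocab["<unk>"])

# Numericalise a sentence: tokens -> vocabulary indices
print(vocab(tokenizer("ein mann spielt fußball .")))
```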

Tutorials

  • 1 - Sequence to Sequence Learning with Neural Networks Open In Colab

    This first tutorial covers the workflow of a seq2seq project in PyTorch with torchtext. We'll cover the basics of seq2seq networks using encoder-decoder models, how to implement these models in PyTorch, and how to use torchtext to do all of the heavy lifting with regards to text processing (a stripped-down encoder-decoder sketch is shown after this list). The model itself will be based on an implementation of Sequence to Sequence Learning with Neural Networks, which uses multi-layer LSTMs.

  • 2 - Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation Open In Colab

    Now that we have the basic workflow covered, this tutorial will focus on improving our results. Building on our knowledge of PyTorch and torchtext gained from the previous tutorial, we'll cover a second model, which helps with the information compression problem faced by encoder-decoder models. This model will be based on an implementation of Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, which uses GRUs.

  • 3 - Neural Machine Translation by Jointly Learning to Align and Translate Open In Colab

    Next, we learn about attention by implementing Neural Machine Translation by Jointly Learning to Align and Translate. This further alleviates the information compression problem by allowing the decoder to "look back" at the input sentence: it creates context vectors that are weighted sums of the encoder hidden states. The weights for this weighted sum are calculated via an attention mechanism, where the decoder learns to pay attention to the most relevant words in the input sentence (a minimal attention sketch is included after this list).

  • 4 - Packed Padded Sequences, Masking, Inference and BLEU Open In Colab

    In this notebook, we improve the previous model architecture by adding packed padded sequences and masking, two methods commonly used in NLP. Packed padded sequences allow us to process only the non-padded elements of our input sentence with our RNN. Masking forces the model to ignore elements we do not want it to look at, such as attention over padded positions. Together, these give us a small performance boost (see the packing and masking sketch after this list). We also cover a very basic way of using the model for inference, allowing us to get translations for any sentence we give to the model, and show how to view the attention values over the source sequence for those translations. Finally, we show how to calculate the BLEU metric from our translations.

  • 5 - Convolutional Sequence to Sequence Learning Open In Colab

    We finally move away from RNN-based models and implement a fully convolutional model. One of the downsides of RNNs is that they are sequential: before a word can be processed by the RNN, all previous words must have been processed. Convolutional models can be fully parallelized, which allows them to be trained much more quickly. We will be implementing the Convolutional Sequence to Sequence Learning model, which uses multiple convolutional layers in both the encoder and decoder, with an attention mechanism between them.

  • 6 - Attention Is All You Need Open In Colab

    Continuing with the non-RNN-based models, we implement the Transformer model from Attention Is All You Need. This model is based solely on attention mechanisms and introduces Multi-Head Attention. The encoder and decoder are made up of multiple layers, each consisting of Multi-Head Attention and Positionwise Feedforward sublayers (a sketch of one such encoder layer follows this list). This model is currently used in many state-of-the-art sequence-to-sequence and transfer-learning tasks.
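
The encoder-decoder idea from the first tutorial, stripped to its essentials: the encoder's final hidden and cell states summarise the source sentence and become the decoder's initial states, and the decoder is then unrolled one target token at a time. This is only an illustrative sketch with toy dimensions, not the tutorial's exact model:

```python
import torch
import torch.nn as nn

# Toy sizes for illustration only
SRC_VOCAB, TRG_VOCAB, EMB, HID = 100, 120, 32, 64

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID)

    def forward(self, src):                  # src: [src_len, batch]
        _, (hidden, cell) = self.rnn(self.embedding(src))
        return hidden, cell                  # final states summarise the sentence

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(TRG_VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID)
        self.out = nn.Linear(HID, TRG_VOCAB)

    def forward(self, token, hidden, cell):  # token: [batch]
        emb = self.embedding(token.unsqueeze(0))           # [1, batch, EMB]
        output, (hidden, cell) = self.rnn(emb, (hidden, cell))
        return self.out(output.squeeze(0)), hidden, cell   # logits over target vocab

encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (7, 2))    # [src_len=7, batch=2]
hidden, cell = encoder(src)
token = torch.zeros(2, dtype=torch.long)     # e.g. the <sos> index
for _ in range(5):                           # greedy decoding for 5 steps
    logits, hidden, cell = decoder(token, hidden, cell)
    token = logits.argmax(dim=-1)
```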
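
The context vectors from the third tutorial are just softmax-weighted sums over the encoder hidden states. A minimal additive-attention sketch with illustrative dimensions (not the notebook's exact module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Score each encoder state against the current decoder state, then return
    the context vector as the softmax-weighted sum of the encoder states."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.attn = nn.Linear(enc_dim + dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden:  [batch, dec_dim]
        # encoder_outputs: [batch, src_len, enc_dim]
        src_len = encoder_outputs.size(1)
        dec = decoder_hidden.unsqueeze(1).expand(-1, src_len, -1)
        energy = torch.tanh(self.attn(torch.cat((dec, encoder_outputs), dim=2)))
        scores = self.v(energy).squeeze(2)                 # [batch, src_len]
        weights = F.softmax(scores, dim=1)                 # attention distribution
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights                            # [batch, enc_dim], [batch, src_len]

attention = AdditiveAttention(enc_dim=64, dec_dim=64, attn_dim=32)
context, weights = attention(torch.randn(2, 64), torch.randn(2, 9, 64))
```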
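
Packed padded sequences and masking, as used in the fourth tutorial, both hinge on knowing the true length of each source sentence. A rough sketch of the two PyTorch primitives involved (the pad index, sizes, and token ids are assumptions for illustration):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

PAD_IDX = 1                                   # assumed padding index
embedding = nn.Embedding(100, 32, padding_idx=PAD_IDX)
rnn = nn.GRU(32, 64, batch_first=True)

# Two sentences with true lengths 5 and 3, padded to length 5
src = torch.tensor([[4, 8, 15, 16, 23],
                    [42, 7, 9, PAD_IDX, PAD_IDX]])
src_lengths = torch.tensor([5, 3])

# Packing lets the RNN skip the padded positions entirely
packed = pack_padded_sequence(embedding(src), src_lengths,
                              batch_first=True, enforce_sorted=False)
packed_outputs, hidden = rnn(packed)
outputs, _ = pad_packed_sequence(packed_outputs, batch_first=True)

# Masking: set attention scores over padding to -inf before the softmax
mask = (src != PAD_IDX)                       # True on real tokens
scores = torch.randn(2, 5)                    # stand-in attention scores
scores = scores.masked_fill(~mask, float("-inf"))
weights = torch.softmax(scores, dim=1)        # zero weight on padded positions
```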
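
Finally, the Multi-Head Attention and position-wise feed-forward sublayers from the last tutorial can be sketched with PyTorch's built-in nn.MultiheadAttention (the notebook implements these pieces from scratch; the dimensions here are illustrative):

```python
import torch
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    """One Transformer encoder layer: a self-attention sublayer and a position-wise
    feed-forward sublayer, each wrapped in a residual connection and layer norm."""
    def __init__(self, hid_dim=64, n_heads=8, pf_dim=256, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hid_dim, n_heads,
                                               dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hid_dim, pf_dim), nn.ReLU(),
                                nn.Linear(pf_dim, hid_dim))
        self.norm1 = nn.LayerNorm(hid_dim)
        self.norm2 = nn.LayerNorm(hid_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, key_padding_mask=None):
        # x: [batch, seq_len, hid_dim]
        attn_out, _ = self.self_attn(x, x, x, key_padding_mask=key_padding_mask)
        x = self.norm1(x + self.dropout(attn_out))     # residual + layer norm
        x = self.norm2(x + self.dropout(self.ff(x)))   # position-wise feed-forward
        return x

layer = EncoderLayerSketch()
out = layer(torch.randn(2, 10, 64))                    # [batch=2, seq_len=10, hid_dim=64]
```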

References

  1. https://github.com/bentrevett/pytorch-seq2seq
  2. https://github.com/spro/practical-pytorch
  3. https://github.com/keon/seq2seq
  4. https://github.com/pengshuang/CNN-Seq2Seq
  5. https://github.com/pytorch/fairseq
  6. https://github.com/jadore801120/attention-is-all-you-need-pytorch
  7. http://nlp.seas.harvard.edu/2018/04/03/attention.html
  8. https://www.analyticsvidhya.com/blog/2019/06/understanding-transformers-nlp-state-of-the-art-models/