
word2vec

public
0 stars
0 forks
28 issues

Commits

List of commits on branch master.
Unverified
8a4ffb58c53d163ce7c582b6e665f7df2faec76b

bugfix in InitUnigramTable() - some words could not have been sampled as negative examples

committed 10 years ago
Unverified
866f02f15bc176b4ee472dbf507d79d90cc2dcaf

Bug fix - added questions-phrases.txt

committed 10 years ago
Unverified
eaa00c18f9a194d1a7bb24926b954165ef376249

added script for training big word vector model using public corpora

committed 10 years ago
Unverified
8c70ff76e10db59314686da627f2ed38bf0faecd

fixed minor bugs

committed 10 years ago
Unverified
3cf112ce3383ab9b51132ecafcc45a184916a5b0

update to 0.1c version

committed 10 years ago
Unverified
c7b5fa0cabb59e28fe1e09b84b286a49314aeab6

and fixed the minimal cosine similarity to be -1 instead of 0

committed 11 years ago

README

The README file for this repository.

Tools for computing distributed representations of words

We provide an implementation of the Continuous Bag-of-Words (CBOW) and the Skip-gram model (SG), as well as several demo scripts.

Given a text corpus, the word2vec tool learns a vector for every word in the vocabulary, using either the Continuous Bag-of-Words or the Skip-gram neural network architecture. The user should specify the following:

  • desired vector dimensionality
  • the size of the context window for either the Skip-Gram or the Continuous Bag-of-Words model
  • training algorithm: hierarchical softmax and / or negative sampling
  • threshold for downsampling the frequent words
  • number of threads to use
  • the format of the output word vector file (text or binary)

Usually, the other hyper-parameters such as the learning rate do not need to be tuned for different training sets.
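The hyper-parameters above map directly onto the tool's command-line flags. A minimal sketch of a typical invocation (the corpus path, output path, and flag values here are illustrative, not prescribed by this README):

```shell
# Illustrative training run; corpus.txt and the values below are placeholders.
#   -size      vector dimensionality
#   -window    context window size
#   -negative / -hs   negative sampling / hierarchical softmax
#   -sample    downsampling threshold for frequent words
#   -threads   number of threads to use
#   -binary    output format (1 = binary, 0 = text)
#   -cbow      1 = Continuous Bag-of-Words, 0 = Skip-gram
./word2vec -train corpus.txt -output vectors.bin -size 200 -window 5 \
    -negative 5 -hs 0 -sample 1e-3 -threads 8 -binary 1 -cbow 1
```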

The script demo-word.sh downloads a small (100MB) text corpus from the web, and trains a small word vector model. After the training is finished, the user can interactively explore the similarity of the words.
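The interactive exploration ranks words by cosine similarity to a query word. A minimal Python sketch of that ranking step, using made-up toy vectors in place of a trained model (real vectors would be read from the word2vec output file):

```python
import math

def cosine(a, b):
    # Cosine similarity; ranges over [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "word vectors", purely illustrative.
vectors = {
    "king":  [0.90, 0.20, 0.10],
    "queen": [0.85, 0.25, 0.15],
    "apple": [0.10, 0.80, 0.60],
}

query = "king"
ranked = sorted(
    ((w, cosine(vectors[query], v)) for w, v in vectors.items() if w != query),
    key=lambda t: -t[1],
)
for word, sim in ranked:
    print(f"{word}\t{sim:.3f}")
```

With these toy vectors, "queen" ranks above "apple" for the query "king", which mirrors how the bundled distance tool presents its nearest-word lists.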

More information about the scripts is provided at https://code.google.com/p/word2vec/