svmjs
Commits

List of commits on branch master:

- b75b71289dd81fc909a5b3fb8b1caf20fbe45121: Merge pull request #6 from harthur/memoize (karpathy, 12 years ago)
- 6a0eded7d3f5074795eb30f01732793cc88308cf: add 'memoize' option to cache kernel computations (harthur, 12 years ago)
- f480d05ed2a4286f53370044ca1f9f1d00f660aa: Big release! fromJSON and toJSON now work. Quite substantial efficiency improvements: the non-support vectors are pruned during training. Also, for linear SVM the weights are automatically computed and used which should be much faster than before. Slight API changes to train(), but backwards compatible. (karpathy, 12 years ago)
- 89533ca18423e4903f97e5e938046bec947d07c5: tiny tweaks, fixed bug in docs (karpathy, 12 years ago)
- 88ae38ee9bdc93764a3e8146a6450e3db922cc08: Merge pull request #2 from harthur/npm-package (karpathy, 12 years ago)
- 681dad0eb84c989b79d1ab7ec5b030e966e60d98: add note about node.js usage to readme (harthur, 12 years ago)

README

svmjs

Andrej Karpathy, July 2012

svmjs is a lightweight implementation of the SMO algorithm to train a binary Support Vector Machine. Because it works with the dual formulation, it also supports arbitrary kernels. A correctness test, together with MATLAB reference code, is in /test.

Online GUI demo

Can be found here: http://cs.stanford.edu/~karpathy/svmjs/demo/

Corresponding code is inside /demo directory.

Usage

The simplest use case:

// include the library
<script src="./svmjs/lib/svm.js"></script>
<script>
data = [[0,0], [0,1], [1,0], [1,1]];
labels = [-1, 1, 1, -1];
testdata = [[0.9, 0.1], [0.1, 0.1]]; // any NxD array of points to classify
svm = new svmjs.SVM();
svm.train(data, labels, {C: 1.0}); // C is a parameter to SVM
testlabels = svm.predict(testdata);
</script>

Here, data and testdata are 2D, NxD arrays of floats, and labels and testlabels are arrays of size N containing 1 or -1. You can also query for the raw margins:

margins = svm.margins(testdata);
margin = svm.marginOne(testdata[0]);
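If you only have raw margins, the predicted label is simply their sign; a minimal sketch of that decision rule (this mirrors what predict() does internally, under the standard binary SVM convention):

```javascript
// The predicted label of a binary SVM is the sign of the raw margin:
// positive margin -> class 1, non-positive margin -> class -1.
function labelsFromMargins(margins) {
  return margins.map(function (m) { return m > 0 ? 1 : -1; });
}
```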

The library supports arbitrary kernels, but currently only the linear and RBF kernels come built in:

svm.train(data, labels, { kernel: function(v1,v2){ /* return K(v1, v2) */} }); // arbitrary function
svm.train(data, labels, { kernel: 'linear' });
svm.train(data, labels, { kernel: 'rbf', rbfsigma: 0.5 }); // sigma in the gaussian kernel = 0.5
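As an illustration of the arbitrary-kernel option, here is a hypothetical homogeneous polynomial kernel of degree 2 (polyKernel is not part of the library; any function of two vectors that returns a scalar works):

```javascript
// A custom kernel: K(v1, v2) = (v1 . v2)^2, the homogeneous
// polynomial kernel of degree 2. Both inputs are arrays of D floats.
function polyKernel(v1, v2) {
  var dot = 0;
  for (var i = 0; i < v1.length; i++) dot += v1[i] * v2[i];
  return Math.pow(dot, 2);
}
```

It would then be passed in as `svm.train(data, labels, { kernel: polyKernel });`.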

For training you can pass in several options. Here are the defaults:

var options = {};
/* C: higher = you trust your data more; lower = more regularization.
Should be roughly in the range 1e-2 ... 1e5. */
options.C = 1.0;
options.tol = 1e-4; // do not touch this unless you're pro
options.alphatol = 1e-7; // used for pruning non-support vectors. do not touch unless you're pro
options.maxiter = 10000; // if you have a larger problem, you may need to increase this
options.kernel = svmjs.linearKernel; // discussed above
options.numpasses = 10; // increase this for higher precision of the result. (but slower)
svm.train(data, labels, options);

Rules of thumb: you almost always want to try the linear SVM first and see how it does. Play around with different values of C from about 1e-2 to 1e5, as every dataset is different; C=1 is usually a fairly reasonable value. Roughly, C is the cost to the SVM of mis-classifying one of your training examples: if you increase it, the SVM will try very hard to fit all your data, which may be good if you strongly trust your data, but in practice you usually don't want it too high. If the linear kernel doesn't work well, try the RBF kernel; you will then have to try different values of both C and, just as crucially, the sigma of the Gaussian kernel.

The linear SVM should be much faster than an SVM with any other kernel. If you want it even faster but less accurate, try increasing options.tol a bit. You can also decrease options.maxiter and especially options.numpasses a bit. If you use a non-linear SVM, you can also speed up prediction by increasing options.alphatol a bit.

If you use the linear or rbf kernel (rather than a custom one), you can save and load the svm:

var json = svm.toJSON(); // svm is a trained SVM, as above
var svm2 = new svmjs.SVM();
svm2.fromJSON(json); // svm2 now behaves identically to svm
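Since toJSON() returns a plain object, the model can be persisted as a string, e.g. to localStorage in the browser or a file in node. A minimal sketch, assuming the object is fully JSON-serializable (saveModel and loadModel are hypothetical helpers, not library functions):

```javascript
// Serialize a model object (e.g. the result of svm.toJSON()) to a string
// that can be written to disk or localStorage, and restore it later.
function saveModel(model) {
  return JSON.stringify(model);
}
function loadModel(str) {
  return JSON.parse(str); // pass the result to svm2.fromJSON(...)
}
```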

Using in node

To use this library in node.js, install with npm:

npm install svm

And use like so:

var svm = require("svm");
var SVM = new svm.SVM();
SVM.train(data, labels);

Implementation details

The SMO algorithm is very space efficient, so you need not worry about running out of memory no matter how large your problem is. You do, however, need to worry about runtime. In practice there are many heuristics for selecting the pair of alphas (i,j) to optimize, and this implementation uses a rather naive approach. If you have a large and complex problem, you will need to increase maxiter a lot (or don't use Javascript!).
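The simplest such pair-selection heuristic (sweep over each index i and pair it with a uniformly random j) can be sketched as follows; this is an illustration of the naive approach, not the library's exact code:

```javascript
// Given the index i of the first alpha and the number of training
// examples n, pick a uniformly random second index j != i.
// Assumes n >= 2, otherwise no valid pair exists.
function randomPair(i, n) {
  var j = i;
  while (j === i) j = Math.floor(Math.random() * n);
  return j;
}
```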

License

MIT