speaker-recognition


About

This is a Speaker Recognition system with GUI.

For more details of this project, please see:

Dependencies

The Dockerfile can be used to get started with the project more easily.

  • Linux, Python 2
  • scikit-learn, scikits.talkbox, pyssp, PyAudio:
    pip install --user scikit-learn scikits.talkbox pyssp PyAudio

  • PyQt4, which can usually be installed with your package manager.
  • (Optional) Python bindings for bob:
    • install blitz, openblas, boost, then:
      for p in bob.extension bob.blitz bob.core bob.sp bob.ap; do
          pip install --user $p
      done


Note: We have our own MFCC implementation, which is used as a fallback when bob is unavailable, but it is not as efficient as the C implementation in bob.
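To give a rough idea of what such a pure-Python fallback involves (this is a minimal sketch, not the project's actual implementation), MFCC extraction can be written with NumPy alone:

```python
import numpy as np

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_filters=26, n_ceps=13, nfft=512):
    """Minimal MFCC sketch: pre-emphasis, framing, power spectrum,
    mel filterbank, log, DCT. Parameter values are illustrative."""
    # Pre-emphasis boosts high frequencies
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice into overlapping Hamming-windowed frames
    n_frames = 1 + max(0, (len(emph) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)
    # Power spectrum of each frame
    pspec = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # Triangular mel-spaced filterbank
    def hz2mel(h): return 2595.0 * np.log10(1 + h / 700.0)
    def mel2hz(m): return 700.0 * (10 ** (m / 2595.0) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((nfft + 1) * mel2hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(pspec @ fbank.T + 1e-10)
    # DCT-II decorrelates the log filterbank energies; keep n_ceps coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return log_energy @ dct.T
```

This yields one 13-dimensional feature vector per 10 ms hop, which is the typical shape of input consumed by the speaker models.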

Algorithms Used

Voice Activity Detection (VAD):
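A simple short-time-energy VAD (one common approach; whether this project uses exactly this method is an assumption) can be sketched as:

```python
import numpy as np

def energy_vad(signal, frame_len=320, threshold_ratio=0.1):
    """Mark a frame as speech when its short-time energy exceeds a
    fraction of the loudest frame's energy. Illustrative only."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    # Relative threshold: robust to overall recording volume
    return energy > threshold_ratio * energy.max()
```

Frames flagged False would be dropped before feature extraction, so silence does not pollute the speaker model.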

Feature:

Model:
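For illustration, a one-GMM-per-speaker model (a common choice for systems of this kind; treat the details below as an assumption rather than the project's exact configuration) can be built with scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def enroll(features_by_speaker, n_components=8):
    """Fit one GMM per speaker on that speaker's feature vectors."""
    models = {}
    for name, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag', random_state=0)
        gmm.fit(feats)
        models[name] = gmm
    return models

def predict(models, feats):
    """Return the enrolled speaker whose GMM gives the highest
    average log-likelihood for the utterance's features."""
    return max(models, key=lambda name: models[name].score(feats))
```

Enrollment and prediction here mirror the two tasks exposed by the command line tool below.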

GUI Demo

Our GUI has basic functionality for recording, enrollment, training and testing, plus a visualization of real-time speaker recognition:

(screenshot: real-time speaker recognition GUI)

You can see our demo video (in Chinese). Note that real-time speaker recognition is extremely hard, because we only use about 1 second of audio to identify the speaker. Therefore the system does not work perfectly.

The GUI part is quite hacky, was built for demo purposes, and is no longer maintained. Take it as a reference, but don't expect it to work out of the box. Use the command line tools to try the algorithms instead.

Command Line Tools

usage: speaker-recognition.py [-h] -t TASK -i INPUT -m MODEL

Speaker Recognition Command Line Tool

optional arguments:
  -h, --help            show this help message and exit
  -t TASK, --task TASK  Task to do. Either "enroll" or "predict"
  -i INPUT, --input INPUT
                        Input Files(to predict) or Directories(to enroll)
  -m MODEL, --model MODEL
                        Model file to save(in enroll) or use(in predict)

Wav files in each input directory will be labeled with the basename of that directory.
Note that wildcard inputs should be *quoted*; they are passed to the glob module.

Examples:
    Train:
    ./speaker-recognition.py -t enroll -i "./bob/ ./mary/ ./person*" -m model.out

    Predict:
    ./speaker-recognition.py -t predict -i "./*.wav" -m model.out
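The quoted-wildcard convention above could be handled along these lines (a hypothetical helper for illustration, not the tool's actual code):

```python
import glob
import itertools

def expand_inputs(pattern_string):
    """Expand a quoted -i argument such as "./bob/ ./mary/ ./person*".
    The shell leaves the quoted string intact; we split it on whitespace
    (so paths containing spaces are not supported) and expand each
    pattern with the glob module."""
    patterns = pattern_string.split()
    matches = itertools.chain.from_iterable(glob.glob(p) for p in patterns)
    return sorted(matches)
```

Quoting matters because an unquoted `./*.wav` would be expanded by the shell into many separate arguments before the script ever sees it.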