
talkGPT4All (public)

152 stars, 20 forks, 5 issues

Commits

List of commits on branch main.

  • f2ba7d85eb938e8d839a0100987003b23c34db0d "feat: Update upstream GPT4All, use GlowTTS" (vvra committed 10 months ago, unverified)
  • 7deb79c4b7697d89b9f73374e9e1b4b6cb37875e "feat:use glow-tts" (vvra committed 10 months ago, unverified)
  • 074a64eb9a655a1cf61bdfd8113b424e38b21599 "doc: add doc todo" (vvra committed a year ago, unverified)
  • 884eced30cc59d0ac471bb98cafea1744beb3853 "doc: add build and publish doc" (vvra committed a year ago, unverified)
  • 420b5b364a5230f3e64f382515403ce9bacb7b41 "bump version 2.1.1" (vvra committed a year ago, unverified)
  • 73353e5468669bb50a712e945cb4b2003b8a0368 "doc: update installation description" (vvra committed a year ago, unverified)

README

The README file for this repository.

talkGPT4All

A voice chatbot based on GPT4All and talkGPT.

Video demo.

Please see this blog post (in Chinese) for more details.

If you are looking for the older version of talkGPT4All, please check out the dev/v1.0.0 branch.

Installation

Install using pip (Recommended)

talkgpt4all is on PyPI; you can install it with a single command:

pip install talkgpt4all
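
If you already have talkgpt4all installed and want to move to the latest PyPI release, the standard pip upgrade flag should work:

pip install -U talkgpt4all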

Install from source code

Clone the code:

git clone https://github.com/vra/talkGPT4All.git <ROOT>

Install the dependencies and talkGPT4All in a Python virtual environment:

cd <ROOT>
python -m venv talkgpt4all
source talkgpt4all/bin/activate
pip install -U pip
pip install -r requirements.txt
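
Note that the source activation line above applies to Linux and macOS shells; on Windows, the usual equivalent for a venv named talkgpt4all is:

talkgpt4all\Scripts\activate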

Extra dependencies for Linux users

We use pyttsx3 to convert text to speech. Note that on Linux you need to install these system dependencies first:

sudo apt update && sudo apt install -y espeak ffmpeg libespeak1

Usage

Open a terminal and type talkgpt4all to begin:

talkgpt4all
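
If you want to see every supported flag before starting a session, the CLI should also print its options via the standard help flag (assuming it uses a conventional argument parser such as argparse):

talkgpt4all --help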

Use different LLMs

You can choose a different LLM using --gpt-model-type <type>. All available choices:

{
"ggml-gpt4all-j-v1.3-groovy"
"ggml-gpt4all-j-v1.2-jazzy"
"ggml-gpt4all-j-v1.1-breezy"
"ggml-gpt4all-j"
"ggml-gpt4all-l13b-snoozy"
"ggml-vicuna-7b-1.1-q4_2"
"ggml-vicuna-13b-1.1-q4_2"
"ggml-wizardLM-7B.q4_2"
}
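
For example, to chat with the snoozy 13B model (the flag and model name are taken directly from the list above):

talkgpt4all --gpt-model-type ggml-gpt4all-l13b-snoozy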

Use different Whisper models

You can choose the Whisper model type using --whisper-model-type <type>. All available choices:

{
"tiny.en"
"tiny"
"base.en"
"base"
"small.en"
"small"
"medium.en"
"medium"
"large-v1"
"large-v2"
"large"
}
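
For example, to use the English-only base Whisper model for speech recognition:

talkgpt4all --whisper-model-type base.en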

Tune voice rate

You can tune the voice rate using --voice-rate <rate>; the default rate is 165. The larger the value, the faster the speech.

e.g.,

talkgpt4all --whisper-model-type large --voice-rate 150

RoadMap

  • [x] Add source building for llama.cpp, with a more flexible interface.
  • [x] More LLMs.
  • [x] Add support for contextual information during chatting.
  • [ ] Test code on Linux, Intel Mac, and WSL2.
  • [ ] Add support for Chinese input and output.
  • [ ] Add documentation and a changelog.

Contributions are welcome!