
metaseq · public · 6452 stars · 723 forks · 155 issues

Commits

List of commits on branch main.

f7ffa5fd61cf90f498a36d365c13dd7f1a912ff7 (verified)
fix: add support for wide characters when building index of dataset files (#728)
mattmazzola committed a year ago

c16d21047d975b7a925648f38cba3190a8ef27d6 (verified)
enable post ckpt callback, support local symlink (#724)
adampolyak committed a year ago

08cfa296d9b29494f7ae771c500880a78b908ca4 (verified)
upgrade flask (#721)
zycalice committed a year ago

edefd4a00c24197486a3989abe28ca4eb3881e59 (verified)
Andy/drop mseq req from reshard (#715)
andrewPoulton committed a year ago

2c8fbd99b60ba9440925b1d657b873ab459141da (verified)
Fix an issue with the 6.7B path (#712)
committed a year ago

efe5633cc82d6306d2caa0c910e33d6bda40f532 (verified)
Repetition Penalties, Factual Nucleus (#306)
klshuster committed a year ago

README

The README file for this repository.

Metaseq

A codebase for working with Open Pre-trained Transformers (OPT), originally forked from fairseq.

Community Integrations

Using OPT with 🤗 Transformers

The OPT 125M–66B models are now available in Hugging Face Transformers. You can access them under the facebook organization on the Hugging Face Hub.
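As a minimal sketch of loading one of these checkpoints (opt-125m and the generation settings below are illustrative choices, not part of the release):

```python
# Minimal sketch: load an OPT checkpoint from the Hugging Face Hub
# and generate a short continuation. The model size and generation
# settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```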

Using OPT-175B with Alpa

The OPT 125M–175B models are now supported in the Alpa project, which enables serving OPT-175B with more flexible parallelisms on older generations of GPUs, such as 40GB A100, V100, T4, M60, etc.

Using OPT with Colossal-AI

The OPT models are now supported in Colossal-AI, which helps users deploy OPT model training and inference efficiently, reducing large AI model budgets and the labor cost of learning and deployment.

Using OPT with CTranslate2

The OPT 125M–66B models can be executed with CTranslate2, which is a fast inference engine for Transformer models. The project integrates the SmoothQuant technique to allow 8-bit quantization of OPT models. See the usage example to get started.
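A minimal sketch of the two-step workflow, assuming the checkpoint has been converted up front (the output directory name, model size, and generation length are illustrative):

```python
# Minimal sketch: run an OPT model converted to CTranslate2 format.
# Assumes the checkpoint was converted beforehand, e.g.:
#   ct2-transformers-converter --model facebook/opt-125m \
#       --output_dir opt-125m-ct2 --quantization int8
# The directory name "opt-125m-ct2" is an illustrative assumption.
import ctranslate2
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
generator = ctranslate2.Generator("opt-125m-ct2")

prompt = "Hello, my name is"
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([start_tokens], max_length=30)
output_ids = tokenizer.convert_tokens_to_ids(results[0].sequences[0])
print(tokenizer.decode(output_ids))
```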

Using OPT with FasterTransformer

The OPT models can be served with FasterTransformer, a highly optimized inference framework written and maintained by NVIDIA. We provide instructions to convert OPT checkpoints into FasterTransformer format and a usage example with some benchmark results.

Using OPT with DeepSpeed

The OPT models can be finetuned using DeepSpeed. See the DeepSpeed-Chat example to get started.
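A rough sketch of what a single DeepSpeed finetuning step could look like, assuming a Hugging Face OPT checkpoint and an illustrative ZeRO-2 config (the DeepSpeed-Chat example is the authoritative recipe):

```python
# Rough sketch: one finetuning step for an OPT model under DeepSpeed.
# The model size, config values, and dummy batch are illustrative
# assumptions; see the DeepSpeed-Chat example for a complete recipe.
# Typically launched via the deepspeed launcher, e.g.: deepspeed train.py
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

ds_config = {
    "train_batch_size": 1,
    "fp16": {"enabled": True},          # assumes a CUDA device
    "zero_optimization": {"stage": 2},  # shard optimizer state + gradients
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
}

engine, _, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
batch = tokenizer(["metaseq is a codebase for OPT."], return_tensors="pt").to(engine.device)
loss = engine(**batch, labels=batch["input_ids"]).loss  # causal LM loss
engine.backward(loss)
engine.step()
```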

Getting Started in Metaseq

Follow setup instructions here to get started.

Documentation on workflows

Background Info

Support

If you have any questions, bug reports, or feature requests regarding either the codebase or the models released in the projects section, please don't hesitate to post on our GitHub Issues page.

Please remember to follow our Code of Conduct.

Contributing

We welcome PRs from the community!

You can find information about contributing to metaseq in our Contributing document.

The Team

Metaseq is currently maintained by the CODEOWNERS: Susan Zhang, Naman Goyal, Punit Singh Koura, Moya Chen, Kurt Shuster, David Esiobu, Igor Molybog, Peter Albert, Andrew Poulton, Nikolay Bashlykov, Binh Tang, Uriel Singer, Yuchen Zhang, Armen Aghajanyan, Lili Yu, and Adam Polyak.

License

The majority of metaseq is licensed under the MIT license; however, portions of the project are available under separate license terms.