
italkiCorpus

Public · 8 stars · 3 forks · 1 issue

Commits

List of commits on branch master.
Unverified
40d38858a9f4269fa8a3785f127d2472cf0dcaf0

Switched to hf trainer

ghomasHudson committed 3 years ago
Verified
96144581fe2e24407002f65cdb95db9da4af02c0

Merge pull request #26 from kritigupta13/master

ghomasHudson committed 3 years ago
Verified
613145ab4d60cb25f2b5c5067a31b4c7b1a2af52

Changed to get different pages

kritigupta13 committed 3 years ago
Unverified
c171915824db05abd76c226b8f0cd9686d79af3c

Added lock to multiprocessing

ghomasHudson committed 4 years ago
Unverified
45b59b52efabd11a3d8ef71fb722c6b1c75fc791

Update formatting strings

ghomasHudson committed 4 years ago
Unverified
ec976a5acc71d8e80096c58f78d7d2d721178968

Merge branch 'master' of https://github.com/ghomasHudson/italkiCorpus

ghomasHudson committed 4 years ago

README

The README file for this repository.

(Badge: italkiCorpus example workflow status)

Dataset for our work: "On the Development of a Large Scale Corpus for Native Language Identification".

Note: The italki website has moved away from the notebook feature used in this project, so this code probably won't work anymore (at least until it is updated).

Gathering data

For copyright reasons, we don't publish the raw data. Instead, tools are provided to recreate the NLI corpus from the italki website.

To recreate the exact dataset collected in 2017, pass the ID list file:

python3 scrape.py recreate 2017_ids.txt

Collect your own new data using:

python3 scrape.py scrape arabic chinese french german hindi italian japanese korean russian spanish turkish

By default, this creates a new folder, italki_data, containing one .txt file per document (named by its document ID) along with a labels CSV file:

document_id, author_id, L1, english_proficiency
142576, 32162, Turkish, 2
248781, 12987, French, 4
...
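
As an illustration, here is a minimal sketch of reading the scraped output back in. The labels filename and column names are assumptions based on the sample above; adjust them if your scrape output differs:

import csv
import pathlib

data_dir = pathlib.Path("italki_data")

# Read the labels CSV (filename assumed; check your scrape output)
with open(data_dir / "labels.csv", newline="") as f:
    rows = list(csv.DictReader(f, skipinitialspace=True))

# Each document is stored as <document_id>.txt alongside the CSV
for row in rows:
    text = (data_dir / (row["document_id"] + ".txt")).read_text()
    print(row["L1"], row["english_proficiency"], text[:40])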

A simple benchmark (WIP)

In the benchmark folder there are two scripts:

  1. italki/italki.py - Loads the data using the Hugging Face Datasets library. You can reuse this for your own models.
  2. train.py - Trains a simple BERT model using the dataset (see the sketch after the example below).

Feel free to use and adapt these for your own research. To load the Hugging Face Datasets version in your own script, you can write:

>>> import datasets
>>> ds = datasets.load_dataset("./benchmark/italki", data="../italki_data")
>>> print(ds["train"][0])
{"document": "Today I went to...", "native_language": "French", "proficiency": 5, ...}

Citation

If you use this dataset in your work, please cite:

@inproceedings{hudson2018development,
  title={On the Development of a Large Scale Corpus for Native Language Identification},
  author={Hudson, Thomas G and Jaf, Sardar},
  booktitle={Proceedings of the 17th International Workshop on Treebanks and Linguistic Theories (TLT 2018), December 13--14, 2018, Oslo University, Norway},
  number={155},
  pages={115--129},
  year={2018},
  organization={Link{\"o}ping University Electronic Press}
}

Dataset Metadata

The following metadata is necessary for this dataset to be indexed by search engines such as Google Dataset Search.

name: Italki Native Language Identification Dataset
alternateName: Italki
url: https://github.com/ghomasHudson/italkiCorpus
description: Native Language Identification (NLI) is the task of identifying an author’s native language from their writings in a second language. This dataset (italki) consists of large quantities of text from the language-learning website italki. The italki website creates a community for language learners to access teaching resources, practice speaking, discuss topics, and ask questions in their target language (in this case, English). We gather free-form ‘Notebook’ documents, which are mainly autobiographical diary entries, with connected profiles describing the native language of the author. This repository contains scripts to download the data, along with the IDs needed to recreate the 2017 dataset.
citation: https://ep.liu.se/ecp/article.asp?issue=155&article=012